Archive for the ‘Machine Learning’ Category

Deep learning based analysis of microstructured materials for thermal radiation control | Scientific Reports – Nature.com


See the rest here:
Deep learning based analysis of microstructured materials for thermal radiation control | Scientific Reports - Nature.com

Is fake data the real deal when training algorithms? – The Guardian

You're at the wheel of your car, but you're exhausted. Your shoulders start to sag, your neck begins to droop, your eyelids slide down. As your head pitches forward, you swerve off the road and speed through a field, crashing into a tree.

But what if your car's monitoring system recognised the tell-tale signs of drowsiness and prompted you to pull off the road and park instead? The European Commission has legislated that from this year, new vehicles be fitted with systems to catch distracted and sleepy drivers to help avert accidents. Now a number of startups are training artificial intelligence systems to recognise the giveaways in our facial expressions and body language.

These companies are taking a novel approach for the field of AI. Instead of filming thousands of real-life drivers falling asleep and feeding that information into a deep-learning model to learn the signs of drowsiness, they're creating millions of fake human avatars to re-enact the sleepy signals.

Big data defines the field of AI for a reason. To train deep learning algorithms accurately, the models need to have a multitude of data points. That creates problems for a task such as recognising a person falling asleep at the wheel, which would be difficult and time-consuming to film happening in thousands of cars. Instead, companies have begun building virtual datasets.

Synthesis AI and Datagen are two companies using full-body 3D scans, including detailed face scans, and motion data captured by sensors placed all over the body, to gather raw data from real people. This data is fed through algorithms that tweak various dimensions many times over to create millions of 3D representations of humans, resembling characters in a video game, engaging in different behaviours across a variety of simulations.
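The "tweak various dimensions" step is essentially what the graphics and robotics worlds call domain randomization. As a rough illustration only (the parameter names and ranges below are invented for the sketch, not Synthesis AI's or Datagen's actual pipeline), a single captured behaviour clip can be fanned out into many labelled variants:

```python
import random

# Hypothetical domain-randomization sketch: one captured behaviour clip is
# re-rendered under many sampled conditions to multiply the dataset.
BODY_TYPES = ["slim", "average", "heavy"]
CAMERA_ANGLES = list(range(0, 360, 45))  # degrees around the driver
LIGHT_LEVELS = [0.2, 0.5, 1.0]           # normalized scene brightness

def randomize(base_clip, n, seed=0):
    """Produce n synthetic variants of one captured behaviour clip."""
    rng = random.Random(seed)
    return [{
        "clip": base_clip,
        "body_type": rng.choice(BODY_TYPES),
        "camera_angle_deg": rng.choice(CAMERA_ANGLES),
        "lighting": rng.choice(LIGHT_LEVELS),
        "motion_jitter": rng.uniform(-0.1, 0.1),  # variability in the movement
    } for _ in range(n)]

dataset = randomize("driver_falls_asleep", n=1000)
```

Every variant keeps the same behaviour label ("falling asleep"), which is what makes the approach cheap: the label comes for free with the simulation, with no human annotator in the loop.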

In the case of someone falling asleep at the wheel, they might film a human performer falling asleep and combine it with motion capture, 3D animations and other techniques used to create video games and animated movies, to build the desired simulation. "You can map [the target behaviour] across thousands of different body types, different angles, different lighting, and add variability into the movement as well," says Yashar Behzadi, CEO of Synthesis AI.

Using synthetic data cuts out a lot of the messiness of the more traditional way to train deep learning algorithms. Typically, companies would have to amass a vast collection of real-life footage and low-paid workers would painstakingly label each of the clips. These would be fed into the model, which would learn how to recognise the behaviours.

The big sell for the synthetic data approach is that it's quicker and cheaper by a wide margin. But these companies also claim it can help tackle the bias that creates a huge headache for AI developers. It's well documented that some AI facial recognition software is poor at recognising and correctly identifying particular demographic groups. This tends to be because these groups are underrepresented in the training data, meaning the software is more likely to misidentify these people.

Niharika Jain, a software engineer and expert in gender and racial bias in generative machine learning, highlights the notorious example of Nikon Coolpix's blink-detection feature, which, because the training data included a majority of white faces, disproportionately judged Asian faces to be blinking. "A good driver-monitoring system must avoid misidentifying members of a certain demographic as asleep more often than others," she says.

The typical response to this problem is to gather more data from the underrepresented groups in real-life settings. But companies such as Datagen say this is no longer necessary. The company can simply create more faces from the underrepresented groups, meaning they'll make up a bigger proportion of the final dataset. Real 3D face scan data from thousands of people is whipped up into millions of AI composites. "There's no bias baked into the data; you have full control of the age, gender and ethnicity of the people that you're generating," says Gil Elbaz, co-founder of Datagen. The creepy faces that emerge don't look like real people, but the company claims that they're similar enough to teach AI systems how to respond to real people in similar scenarios.
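Datagen's actual pipeline is proprietary, but the rebalancing idea itself is simple to sketch. In this toy version (pure illustration, not the company's method), synthetic faces are generated for each demographic group until every group reaches the size of the largest one:

```python
from collections import Counter

def rebalance(labels):
    """Top up each demographic group with synthetic samples until
    every group matches the largest group's count."""
    counts = Counter(labels)
    target = max(counts.values())
    synthetic = []
    for group, n in counts.items():
        # tag synthetic samples so they can be audited separately later
        synthetic += [f"synthetic:{group}"] * (target - n)
    return labels + synthetic

# A skewed real dataset: 700 / 200 / 100 samples across three groups.
real = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100
balanced = rebalance(real)  # now 700 per group, 1,000 of them real
```

Whether such synthetic top-ups actually close the performance gap on the underrepresented groups is exactly the point researchers dispute.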

There is, however, some debate over whether synthetic data can really eliminate bias. Bernease Herman, a data scientist at the University of Washington eScience Institute, says that although synthetic data can improve the robustness of facial recognition models on underrepresented groups, she does not believe that synthetic data alone can close the gap between the performance on those groups and others. Although the companies sometimes publish academic papers showcasing how their algorithms work, the algorithms themselves are proprietary, so researchers cannot independently evaluate them.

In areas such as virtual reality, as well as robotics, where 3D mapping is important, synthetic data companies argue it could actually be preferable to train AI on simulations, especially as 3D modelling, visual effects and gaming technologies improve. "It's only a matter of time until you can create these virtual worlds and train your systems completely in a simulation," says Behzadi.

This kind of thinking is gaining ground in the autonomous vehicle industry, where synthetic data is becoming instrumental in teaching self-driving vehicles' AI how to navigate the road. The traditional approach of filming hours of driving footage and feeding this into a deep learning model was enough to get cars relatively good at navigating roads. But the issue vexing the industry is how to get cars to reliably handle what are known as edge cases: events that are rare enough that they don't appear much in millions of hours of training data. For example, a child or dog running into the road, complicated roadworks, or even some traffic cones placed in an unexpected position, which was enough to stump a driverless Waymo vehicle in Arizona in 2021.

With synthetic data, companies can create endless variations of scenarios in virtual worlds that rarely happen in the real world. "Instead of waiting millions more miles to accumulate more examples, they can artificially generate as many examples as they need of the edge case for training and testing," says Phil Koopman, associate professor in electrical and computer engineering at Carnegie Mellon University.

AV companies such as Waymo, Cruise and Wayve are increasingly relying on real-life data combined with simulated driving in virtual worlds. Waymo has created a simulated world using AI and sensor data collected from its self-driving vehicles, complete with artificial raindrops and solar glare. It uses this to train vehicles on normal driving situations, as well as the trickier edge cases. In 2021, Waymo told the Verge that it had simulated 15bn miles of driving, versus a mere 20m miles of real driving.

An added benefit of testing autonomous vehicles out in virtual worlds first is minimising the chance of very real accidents. "A large reason self-driving is at the forefront of a lot of the synthetic data stuff is fault tolerance," says Herman. "A self-driving car making a mistake 1% of the time, or even 0.01% of the time, is probably too much."

In 2017, Volvo's self-driving technology, which had been taught how to respond to large North American animals such as deer, was baffled when encountering kangaroos for the first time in Australia. "If a simulator doesn't know about kangaroos, no amount of simulation will create one until it is seen in testing and designers figure out how to add it," says Koopman. For Aaron Roth, professor of computer and cognitive science at the University of Pennsylvania, the challenge will be to create synthetic data that is indistinguishable from real data. He thinks it is plausible that we're at that point for face data, as computers can now generate photorealistic images of faces. "But for a lot of other things," which may or may not include kangaroos, "I don't think that we're there yet."

Excerpt from:
Is fake data the real deal when training algorithms? - The Guardian

Making Mind Reading Possible: Invention Allows Amputees To Control a Robotic Arm With Their Mind – SciTechDaily

Researchers have created a device that can read and decipher brain signals, allowing amputees to control the arm using only their thoughts.

A University of Minnesota research team has made mind-reading possible through the use of electronics and AI.

Researchers at the University of Minnesota Twin Cities have created a system that enables amputees to operate a robotic arm using their brain impulses rather than their muscles. This new technology is more precise and less intrusive than previous methods.

The majority of commercial prosthetic limbs now on the market are controlled by the shoulders or chest using a wire and harness system. More sophisticated models employ sensors to detect small muscle movements in the patient's natural limb above the prosthetic. Both options, however, can be difficult for amputees to learn how to use and are sometimes unhelpful.

University of Minnesota Department of Biomedical Engineering Associate Professor Zhi Yang shakes hands with research participant Cameron Slavens, who tested out the researchers' robotic arm system. With the help of industry collaborators, the researchers have developed a way to tap into a patient's brain signals through a neural chip implanted in the arm, effectively reading the patient's mind and opening the door for less invasive alternatives to brain surgeries. Credit: Neuroelectronics Lab, University of Minnesota

The Department of Biomedical Engineering at the University of Minnesota, with the help of industrial collaborators, has developed a tiny, implantable device that connects to the peripheral nerve in a person's arm. The technology, when coupled with a robotic arm and an artificial intelligence computer, can detect and decipher brain impulses, enabling upper limb amputees to move the arm only with their thoughts.

The researchers' most recent paper was published in the Journal of Neural Engineering, a peer-reviewed scientific journal for the interdisciplinary field of neural engineering.

The University of Minnesota-led team's technology allows research participant Cameron Slavens to move a robotic arm using only his thoughts. Credit: Eve Daniels

"It's a lot more intuitive than any commercial system out there," said Jules Anh Tuan Nguyen, a postdoctoral researcher and University of Minnesota Twin Cities biomedical engineering Ph.D. graduate. "With other commercial prosthetic systems, when amputees want to move a finger, they don't actually think about moving a finger. They're trying to activate the muscles in their arm, since that's what the system reads. Because of that, these systems require a lot of learning and practice. For our technology, because we interpret the nerve signal directly, it knows the patient's intention. If they want to move a finger, all they have to do is think about moving that finger."

Nguyen has been working on this research for about 10 years with the University of Minnesota's Department of Biomedical Engineering Associate Professor Zhi Yang and was one of the key developers of the neural chip technology.

When combined with an artificial intelligence computer and the above robotic arm, the University of Minnesota researchers neural chip can read and interpret brain signals, allowing upper limb amputees to control the arm using only their thoughts. Credit: Neuroelectronics Lab, University of Minnesota

The project began in 2012 when Edward Keefer, an industry neuroscientist and CEO of Nerves, Incorporated, approached Yang about creating a nerve implant that could benefit amputees. The pair received funding from the U.S. government's Defense Advanced Research Projects Agency (DARPA) and have since conducted several successful clinical trials with real amputees.

The researchers also worked with the University of Minnesota Technology Commercialization office to form a startup called Fasikl (a play on the word "fascicle," which refers to a bundle of nerve fibers) to commercialize the technology.

"The fact that we can impact real people and one day improve the lives of human patients is really important," Nguyen said. "It's fun getting to develop new technologies, but if you're just doing experiments in a lab, it doesn't directly impact anyone. That's why we want to be at the University of Minnesota, involving ourselves in clinical trials. For the past three or four years, I've had the privilege of working with several human patients. I can get really emotional when I can help them move their finger or help them do something that they didn't think was possible before."

A big part of what makes the system work so well compared to similar technologies is the incorporation of artificial intelligence, which uses machine learning to help interpret the signals from the nerve.

"Artificial intelligence has the tremendous capability to help explain a lot of relationships," Yang said. "This technology allows us to record human data, nerve data, accurately. With that kind of nerve data, the AI system can fill in the gaps and determine what's going on. That's a really big thing, to be able to combine this new chip technology with AI. It can help answer a lot of questions we couldn't answer before."

The technology has benefits not only for amputees but also for other patients who suffer from neurological disorders and chronic pain. Yang sees a future where invasive brain surgeries will no longer be needed and brain signals can be accessed through the peripheral nerve instead.

Plus, the implantable chip has applications that go beyond medicine.

Right now, the system requires wires that come through the skin to connect to the exterior AI interface and robotic arm. But if the chip could connect remotely to any computer, it would give humans the ability to control their personal devices (a car or phone, for example) with their minds.

"Some of these things are actually happening. A lot of research is moving from what's in the so-called fantasy category into the scientific category," Yang said. "This technology was designed for amputees for sure, but if you talk about its true potential, this could be applicable to all of us."

In addition to Nguyen, Yang, and Keefer, other collaborators on this project include Associate Professor Catherine Qi Zhao and researcher Ming Jiang from the University of Minnesota Department of Computer Science and Engineering; Professor Jonathan Cheng from the University of Texas Southwestern Medical Center; and all group members of Yang's Neuroelectronics Lab in the University of Minnesota's Department of Biomedical Engineering.

Reference: "A portable, self-contained neuroprosthetic hand with deep learning-based finger control" by Anh Tuan Nguyen, Markus W. Drealan, Diu Khue Luu, Ming Jiang, Jian Xu, Jonathan Cheng, Qi Zhao, Edward W. Keefer and Zhi Yang, 11 October 2021, Journal of Neural Engineering. DOI: 10.1088/1741-2552/ac2a8d

Read the original here:
Making Mind Reading Possible: Invention Allows Amputees To Control a Robotic Arm With Their Mind - SciTechDaily

Machine Learning on the Trading Desk – Traders Magazine

With Julien Messias, Founder, Head of Research & Development, Quantology Capital Management

Briefly describe your firm, and your own professional background?

Quantology Capital Management is a leading French asset manager specializing in quantitative finance. We manage three listed equity-based strategies; our investment philosophy is focused on capturing outperforming stocks by analyzing investors' decision-making processes.

Our aim is to exploit behavioral biases (over/under price reactions to corporate events) in a systematic way, in order to generate alpha. Our trading/R&D desk is composed of four experienced people with engineering and actuarial science backgrounds.

I am a fellow at the French Institute of Actuaries and I run the R&D/trading team at Quantology. Previously I ran vanilla and light exotic equity derivatives trading books at ING Financial Markets.

How does Quantology use machine learning?

The purpose of machine learning at Quantology Capital Management is to improve our strategies in a non-intuitive way, i.e., to test dependence on new factors or to uncover new high-frequency execution patterns.

It is important to note that cleaning the data takes up 80% of a data scientist's time. This process requires four steps. First, one needs to ensure the data is clean and complete.

Second, the dataset must be debiased: the informative filtration must be adapted, for which we use exclusively either point-in-time or real-time market data. We create and feed our own databases continuously. Third, the data can be quantitative, which is usually structured, and we have recently added qualitative alternative data, which is usually unstructured. Finally, we must ensure that the data is easily available and readable.
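The point-in-time requirement is the crux of the debiasing step: a back-test must only ever see data that had actually been published at the simulated decision date, otherwise look-ahead bias creeps in. A minimal sketch of that filter (field names are illustrative, not Quantology's schema):

```python
from datetime import date

def point_in_time(records, as_of):
    """Keep only records already published at the back-test date."""
    return [r for r in records if r["published"] <= as_of]

records = [
    {"ticker": "ACME", "eps": 1.2, "published": date(2021, 2, 1)},
    {"ticker": "ACME", "eps": 1.4, "published": date(2021, 5, 1)},  # later restatement
]

# A back-test dated March 2021 must see the original figure, not the restatement.
visible = point_in_time(records, as_of=date(2021, 3, 1))
```

Using the restated value instead would silently leak future information into the strategy, which is exactly the bias the filtration is meant to remove.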

This process enables Quantology Capital Management to exhibit the best proxy of the collective intelligence of the market, which is one of the strong principles that we rely on. For that, the more data, the better. But the more data, the messier as well. It is a perpetual trade-off between the quantity of the data, and its precision.

What are the challenges of implementing AI/ML on a trading desk?

When running a hedge fund, on one hand you must be continually focused on applying new techniques and using new data. On the other hand, a manager must maintain steady investment principles and axioms which are at the heart of success.

That said, you cannot have your whole business, from A to Z, relying only on ML. One of the most well-known issues is overfitting. This denotes a situation in which a model fits particular observations (too much emphasis on outliers, for example) rather than a general structure based on certain parameters. The resulting recommendations lead to losses when, consciously or subconsciously, the results are not challenged sufficiently.
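A standard guard against overfitting is to track loss on a held-out validation set during training and stop once it starts rising while training loss keeps falling. A minimal early-stopping-style check (illustrative only, not Quantology's implementation):

```python
def overfit_epoch(val_losses, patience=3):
    """Return the epoch with the best validation loss once the loss has
    risen for `patience` consecutive epochs afterwards, else None."""
    best = float("inf")
    best_epoch = None
    rising = 0
    for epoch, v in enumerate(val_losses):
        if v < best:
            best, best_epoch, rising = v, epoch, 0
        else:
            rising += 1
            if rising >= patience:
                return best_epoch
    return None

# Training loss falls monotonically, but validation loss bottoms out at
# epoch 3 and then climbs: the classic overfitting signature.
train = [1.00, 0.70, 0.50, 0.35, 0.25, 0.18, 0.12, 0.08]
val   = [1.10, 0.80, 0.60, 0.55, 0.60, 0.70, 0.85, 1.00]
stop = overfit_epoch(val)  # 3
```

Rolling the model back to the returned epoch keeps the general structure it learned while discarding the part that merely memorized the sample.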

How can machine learning be a competitive advantage for a hedge fund?

Machine learning is a wonderful basket of tools that can be used to sharpen your trading, which can be a significant competitive advantage.

Today, we notice several initiatives on different avenues. You have the explorers, researchers focused on grabbing more and more data, versus the technicians, people who are working on traditional market data and trying to improve current processes. The latter group evolves in a well-known environment, eager to apply techniques to their traditional structured datasets.

How does Quantology work with technology solutions providers?

The infrastructure complexity has to be handled properly. To achieve that, one must focus on the business relationship one creates with the technology solution providers. It takes a lot of time for an asset management firm to deal with such partners, as the consistency, the accuracy and the format of the data have to be constantly challenged. A provider has to be much more than a data vendor: it must think as a long-term partner interested in its clients' success, and it must learn from users' feedback.

What are future threats to machine learning and artificial intelligence processes?

Quantitative and systematic strategies are commonly criticized for suffering from time-decay, to speak as an option trader. They are challenged as well from a perceived lack of adaptability.

The main drawback of machine learning is how it suffers during non-stable financial markets. It is very challenging to find a strategy that can be an all-road or all-weather, and a strategy that can be sample-independent.

The best way to address and fix this topic is by splitting the database into three sub datasets: one dedicated for training, the second for testing, and the third for validation.
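The three-way split described above can be sketched in a few lines (the ratios here are illustrative; the article does not specify them):

```python
import random

def three_way_split(rows, train_frac=0.70, test_frac=0.15, seed=42):
    """Shuffle, then cut into train / test / validation subsets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    n_train = int(len(rows) * train_frac)
    n_test = int(len(rows) * test_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_test],
            rows[n_train + n_test:])

train_set, test_set, val_set = three_way_split(range(1000))
```

The validation subset is touched only once, at the very end, so it gives an honest estimate of how sample-dependent the strategy really is.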

More than the algos themselves, innovation happens in the data storage field with data lakes or data warehouses, which enable researchers to gather data from different sources, as well as different formats of corporate data. The issue with such solutions is the cost of calculation when grabbing the data, as it is raw and not sorted; the resulting lack of visibility into the dataset makes it unsuitable for high-frequency decisions.

In the near term, all asset managers, from the smallest boutiques to the biggest asset managers, will include standard machine learning tools in their process. Thus, obtaining alpha sources from machine learning will require more and more investment, capabilities and unique sets of data. Having said that, we have noticed recent efforts are less on the algos, which are getting public sooner, and more on the datasets. The algo can be considered as the engine, the data as the gas: in the long run, which is more expensive? The industry needs to answer that question.

This article first appeared in the Q2 issue of GlobalTrading, a Markets Media Group publication.

Link:
Machine Learning on the Trading Desk - Traders Magazine

Snowflake is trying to bring machine learning to the everyman – TechRadar

Snowflake has set out plans to help democratize access to machine learning (ML) resources by eliminating complexities for non-expert customers.

At its annual user conference, Snowflake Summit, the database company has made a number of announcements designed to facilitate the uptake of machine learning. Chief among them, enhanced support for Python (the language in which many ML products are written) and a new app marketplace that allows partners to monetize their models.

"Our objective is to make it as easy as possible for customers to leverage advanced ML models without having to build from scratch, because that requires a huge amount of expertise," said Tal Shaked, who heads up ML at Snowflake.

"Through projects like Snowflake Marketplace, we want to give customers a way to run these kinds of models against their data, both at scale and in a secure way."

Although machine learning is a decades-old concept, only within the last few years have advances in compute, storage, software and other technologies paved the way for widespread adoption.

And even still, the majority of innovation and expertise is pooled disproportionately among a small minority of companies, like Google and Meta.

The ambition at Snowflake is to open up access to the opportunities available at the cutting edge of machine learning through a partnership- and ecosystem-driven approach.

Shaked, who worked across a range of machine learning projects at Google before joining Snowflake, explained that customers will gain access to the foundational resources, on top of which they can make small optimizations for their specific use cases.

For example, a sophisticated natural language processing (NLP) model developed by the likes of OpenAI could act as the general-purpose foundation for a fast food customer looking to develop an ML-powered ordering system, he suggested. In this scenario, the customer is involved in none of the training and tuning of the underlying model, but still reaps all the benefits of the technology.


"There's so much innovation happening within the field of ML and we want to bring that into Snowflake in the form of integrations," he told TechRadar Pro. "It's about asking how we can integrate with these providers so our customers can do the fine-tuning without needing to hire a bunch of PhDs."

This sentiment was echoed earlier in the day by Benoit Dageville, co-founder of Snowflake, who spoke about the importance of sharing expertise across the customer and partner ecosystem.

"Democratizing ML is an important aspect of what we are trying to do. We're becoming an ML platform, but not just where you build it and use it for yourself; the revolution is in the sharing of expertise."

"It's no longer just the Googles and Metas of this world using this technology, because we're making it easy to share."

Disclaimer: Our flights and accommodation for Snowflake Summit 2022 were funded by Snowflake, but the organization had no editorial control over the content of this article.

Continue reading here:
Snowflake is trying to bring machine learning to the everyman - TechRadar