Archive for the ‘Artificial Intelligence’ Category

Building explainability into the components of machine-learning models – MIT News

Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient's risk of developing cardiac disease, a physician might want to know how strongly the patient's heart rate data influences that prediction.
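
As a concrete illustration of this kind of feature-contribution explanation, here is a minimal sketch using scikit-learn's permutation importance on a synthetic risk dataset; the column names and data are invented purely for demonstration and are not from the MIT study.

```python
# Minimal sketch of a feature-contribution explanation (hypothetical data and columns).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "mean_heart_rate": rng.normal(75, 10, 500),
    "age": rng.integers(30, 80, 500),
    "cholesterol": rng.normal(200, 30, 500),
})
# Synthetic label loosely tied to heart rate and age, just for demonstration.
y = ((X["mean_heart_rate"] > 80) & (X["age"] > 55)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: contribution ~ {score:.3f}")
```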

But if those features are so complex or convoluted that the user can't understand them, does the explanation method do any good?

MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.

"We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself," says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.

To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model's prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.

They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

MIT co-authors include Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a principal data scientist at Corelight. The research is published in the June edition of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining's peer-reviewed Explorations Newsletter.

Real-world lessons

Features are input variables that are fed to machine-learning models; they are usually drawn from the columns in a dataset. Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.

For several years, he and his team have worked with decision makers to identify machine-learning usability challenges. These domain experts, most of whom lack machine-learning knowledge, often don't trust models because they don't understand the features that influence predictions.

For one project, they partnered with clinicians in a hospital ICU who used machine learning to predict the risk that a patient will face complications after cardiac surgery. Some features were presented as aggregated values, like the trend of a patient's heart rate over time. While features coded this way were "model ready" (the model could process the data), clinicians didn't understand how they were computed. "They would rather see how these aggregated features relate to original values, so they could identify anomalies in a patient's heart rate," Liu says.
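
To make the clinicians' complaint concrete, here is a small, hypothetical sketch of how a "trend of heart rate over time" feature might be computed from raw readings; the model sees only a single slope value, while the raw series the clinicians want to inspect is left behind. The readings and time window are invented.

```python
# Hypothetical illustration: collapsing a raw heart-rate series into one "trend" feature.
import numpy as np

heart_rate = np.array([72, 75, 74, 78, 83, 88, 91, 95])  # made-up readings over 8 hours
hours = np.arange(len(heart_rate))

# Least-squares slope: the "model ready" aggregate the clinicians found opaque.
trend_bpm_per_hour = np.polyfit(hours, heart_rate, deg=1)[0]
print(f"trend feature: {trend_bpm_per_hour:.2f} bpm/hour")
# The raw readings above are what clinicians would rather see alongside the aggregate.
```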

By contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like "number of posts a student made on discussion forums," they would rather have related features grouped together and labeled with terms they understood, like "participation."

"With interpretability, one size doesn't fit all. When you go from area to area, there are different needs. And interpretability itself has many levels," Veeramachaneni says.

The idea that one size doesn't fit all is key to the researchers' taxonomy. They define properties that can make features more or less interpretable for different decision makers and outline which properties are likely most important to specific users.

For instance, machine-learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the models performance.

On the other hand, decision makers with no machine-learning experience might be better served by features that are human-worded, meaning they are described in a way that is natural for users, and understandable, meaning they refer to real-world metrics users can reason about.

"The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with," Zytek says.

Putting interpretability first

The researchers also outline feature engineering techniques a developer can employ to make features more interpretable for a specific audience.

Feature engineering is a process in which data scientists transform data into a format machine-learning models can process, using techniques like aggregating data or normalizing values. Most models also can't process categorical data unless it is converted to a numerical code. These transformations are often nearly impossible for laypeople to unpack.
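
For example, two of the transformations mentioned here, normalizing a numeric column and converting a categorical column into numerical codes, might look like the following sketch; the column names and values are invented for illustration.

```python
# Sketch of common model-ready transformations (hypothetical columns).
import pandas as pd

df = pd.DataFrame({
    "age": [2, 15, 34, 67],
    "admission_type": ["emergency", "elective", "emergency", "transfer"],
})

# Normalize a numeric feature to zero mean and unit variance.
df["age_scaled"] = (df["age"] - df["age"].mean()) / df["age"].std()

# One-hot encode a categorical feature into numeric columns.
df = pd.get_dummies(df, columns=["admission_type"])
print(df)
# The resulting columns (age_scaled, admission_type_emergency, ...) are what the
# model consumes, and what a layperson may struggle to unpack.
```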

"Creating interpretable features might involve undoing some of that encoding," Zytek says. For instance, a common feature engineering technique organizes spans of data so they all contain the same number of years. To make these features more interpretable, one could group age ranges using human terms, like infant, toddler, child, and teen. Or rather than using a transformed feature like average pulse rate, an interpretable feature might simply be the actual pulse rate data, Liu adds.
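
A minimal sketch of the age-binning idea, using human-readable labels instead of uniform numeric bins; the boundaries below are chosen purely for illustration.

```python
# Sketch: replace opaque numeric bins with human-readable age groups.
import pandas as pd

ages = pd.Series([1, 3, 9, 15, 42])
age_group = pd.cut(
    ages,
    bins=[0, 2, 4, 12, 19, 120],                        # illustrative boundaries
    labels=["infant", "toddler", "child", "teen", "adult"],
)
print(age_group.tolist())  # ['infant', 'toddler', 'child', 'teen', 'adult']
```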

"In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible," Zytek says.

Building off this work, the researchers are developing a system that enables a model developer to handle complicated feature transformations in a more efficient manner, to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that can be understood by decision makers.

Read more here:
Building explainability into the components of machine-learning models - MIT News

Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip – Military & Aerospace Electronics

CHANDLER, Ariz. – Microchip Technology Inc. in Chandler, Ariz., is introducing the SAMA7G54 Arm Cortex A7-based microprocessor that runs as fast as 1 GHz for low-power stereo vision applications with accurate depth perception.

The SAMA7G54 includes a MIPI CSI-2 camera interface and a traditional parallel camera interface for high-performing yet low-power artificial intelligence (AI) solutions that can be deployed at the edge, where power consumption is at a premium.

AI solutions often require advanced imaging and audio capabilities which typically are found only on multi-core microprocessors that also consume much more power.

When coupled with Microchip's MCP16502 Power Management IC (PMIC), this microprocessor enables embedded designers to fine-tune their applications for best power consumption vs. performance, while also optimizing for low overall system cost.

Related: Embedded computing sensor and signal processing meets the SWaP test

The MCP16502 is supported by Microchip's mainline Linux distribution for the SAMA7G54, allowing for easy entry and exit from available low-power modes, as well as support for dynamic voltage and frequency scaling.

For audio applications, the device has audio features such as four I2S digital audio ports, an eight-microphone array interface, an S/PDIF transmitter and receiver, as well as a stereo four-channel audio sample rate converter. It has several microphone inputs for source localization for smart speaker or video conferencing systems.

The SAMA7G54 also integrates Arm TrustZone technology with secure boot, and secure key storage and cryptography with acceleration. The SAMA7G54-EK Evaluation Kit (CPN: EV21H18A) features connectors and expansion headers for easy customization and quick access to embedded features.

For more information contact Microchip online at http://www.microchipdirect.com.

Read the original here:
Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip - Military & Aerospace Electronics

What’s Your Future of Work Path With Artificial Intelligence? – CMSWire

What does the future of artificial intelligence in the workplace look like for employee experience?

Over the last few years, artificial intelligence (AI) has become a very significant part of business operations across all industries. It's already making an impact in our daily lives, from appliances, voice assistants, search, surveillance, marketing, autonomous vehicles, video games and TVs to large sporting events.

AI is the result of applying cognitive science techniques to emulate human intellect and artificially create something that performs tasks once thought to be possible only for humans, like reasoning, natural communication and problem-solving. It does this by leveraging machine learning techniques, reading and analyzing large data sets to identify patterns, detect anomalies and make decisions with no human intervention.
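
As a small, hedged illustration of the "identify patterns, detect anomalies" part, here is an unsupervised anomaly detector run over a made-up set of transaction amounts; the data and threshold behavior are illustrative only.

```python
# Illustrative sketch: flagging anomalies in a toy dataset with no human intervention.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
amounts = np.concatenate([rng.normal(50, 5, 200), [500.0, 720.0]]).reshape(-1, 1)

detector = IsolationForest(random_state=0).fit(amounts)
flags = detector.predict(amounts)          # -1 = anomaly, 1 = normal
print(amounts[flags == -1].ravel())        # whatever the model flags as outlying
```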

In this ever-evolving market, AI has become crucial for businesses looking to upgrade workplace infrastructure and improve employee experience. According to Precedence Research, the AI market is projected to reach around $1,597.1 billion by 2030, expanding at a compound annual growth rate (CAGR) of 38.1% from 2022 to 2030.
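
For readers who want to sanity-check that growth figure, the arithmetic below back-calculates the 2022 market size implied by the quoted 2030 value and CAGR; it uses only the numbers stated above, and the result is approximate.

```python
# Arithmetic check of the projection: what 2022 market size is implied by
# $1,597.1B in 2030 at a 38.1% CAGR? (Pure back-calculation from the quoted figures.)
value_2030 = 1597.1          # USD billions, per Precedence Research
cagr = 0.381
years = 2030 - 2022

implied_2022 = value_2030 / (1 + cagr) ** years
print(f"implied 2022 market size: ~${implied_2022:.0f}B")   # roughly $120B
```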

Currently, AI is being used in the workplace to automate jobs that are repetitive or require a high degree of precision, like data entry or analysis. AI can also be used to make predictions about customer behavior or market trends.

In the future, AI is expected to increasingly be used to augment human workers, providing them with recommendations or suggestions based on the data that it has been programmed to analyze.

Today's websites can use AI to detect potential customer intent in real time, based on an online visitor's interactions, and to show more engaging, personalized content that increases the likelihood of conversion. As AI continues to develop, its capabilities in the workplace are expected to increase, making it an essential tool for businesses looking to stay ahead of the competition.
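
A toy sketch of the intent-detection idea follows, training a classifier on a handful of invented visitor messages; a real system would use far more data and richer behavioral signals than text alone.

```python
# Hypothetical sketch: classifying visitor intent from short interaction text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "what is your return policy", "how do I cancel my order",
    "show me pricing for the pro plan", "do you offer a free trial",
    "compare plans", "talk to sales",
]
intents = ["support", "support", "purchase", "purchase", "purchase", "purchase"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, intents)
print(clf.predict(["how much does the enterprise plan cost"]))  # likely 'purchase'
```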

Kai-Fu Lee, a famous computer scientist, businessman and writer, said in a 2019 interview with CBS News that he believes 40% of the world's jobs will be replaced by robots capable of automating tasks.

AI has the potential to replace many types of jobs that involve mechanical or structured tasks that are repetitive in nature. Examples we are seeing now include robotic vehicles, drones, surgical devices, logistics, call centers, and administrative tasks like housekeeping, data entry and proofreading. Even armies of robots for security and defense are being discussed.

That said, AI is going to be a huge disruption worldwide over the next decade or so. Most innovations come from disruptions; take the COVID-19 pandemic as an example: it dramatically changed how we work.

While AI takes some jobs, it also creates many opportunities. When it comes to strategic thinking, creativity, emotions and empathy, humans will always win over machines. This is a call to adapt to the change and strengthen human factors in the workplace in every possible dimension. Nokia and BlackBerry mobile phones and Kodak cameras are living examples of failure to acknowledge digital disruption. Timely market research, using the right technology and enabling the workforce to adapt to change can bring success to businesses through digital transformation.

Related Article:What's Next for Artificial Intelligence in Customer Experience?

There will be changes in the traditional means of doing things, and more jobs will be generated. AI has the potential to revolutionize the workplace, transforming how we do everything from customer service to driving cars in one of the busiest places like downtown San Francisco. However, there are still several challenges that need to be overcome before AI can be widely implemented in the workplace.

One of the biggest challenges is developing algorithms that can reliably replicate human tasks. This is difficult because human tasks often involve common sense and reasoning, which are difficult for computers to understand. We should also ensure that AI systems are fair and unbiased. This is important because AI systems are often used to make decisions about things like hiring and promotions, and if they are biased, this can lead to discrimination. We live in a world of diversity, equity and inclusion (DEI), and mistakes with AI can be costly for businesses. It may take a very long time to develop a customer-centric model that is completely dependent on AI, one that is reliable and trustworthy.

The future of AI is hard to predict, but there are a few key trends that are likely to shape its development. The increasing availability of data will allow AI systems to become more accurate and efficient, and as businesses and individuals rely on AI more and more, a need for new types of AI applications means more work and jobs. As these trends continue, AI is likely to have a significant impact on the workforce. It can very well lead to the automation of many cognitive tasks, including those that are currently performed by human workers.

This could result in a reduction in the overall demand for labor, as well as an increase in the need for workers with skills that complement AI systems. AI is the future of work; there's no doubt about that, but how it will shape the future of the human workforce remains to be seen.

Many are worried that AI will remove many jobs, while others see it as an opportunity to increase efficiency and accuracy in the workforce. No matter which side you're on, it's important to understand how AI is changing the way we work and what that means for the future.

Related Article: 8 Examples of Artificial Intelligence in the Workplace

Let's look at a few real-world examples that are already changing the way we work:

All of the above implementations look great. However, it is important to note that AI should be used as a supplement to human intelligence, not a replacement for it. When used properly, AI can help businesses thrive. The role of AI in the workplace is ever-evolving, and it will be interesting to see how businesses adopt these technologies and improve the overall work environment to provide the best employee experience.

An October 2020 Gallup poll found that 51% of workers are not engaged: they are psychologically unattached to their work and company.

Here are some employee experience aspects that AI could improve:

Employees need to know and trust that you have their best interests in mind. The value of AI in human resources is going to be critical to deliver employee experiences along with human connection and values.

Continue reading here:
What's Your Future of Work Path With Artificial Intelligence? - CMSWire

Does Artificial Intelligence Really Have the Potential to Create Transformative Art? – Literary Hub

I. The Situation

In 1896, the Lumière brothers released a 50-second-long film, The Arrival of a Train at La Ciotat, and a myth was born. The audiences, it was reported, were so entranced by the new illusion that they jumped out of the way as the flickering image steamed towards them.

The urban legend of film-induced mass panic, established well before 1900, illustrated a valid contention even if the story was, in fact, untrue: The technology had produced a new emotional reaction. That reaction was hugely powerful but inchoate and inarticulate. Nobody knew what it was doing or where it would go. Nobody had any idea that it would turn into what we call film. Today, the world is in a similar state of bountiful confusion over the creative use of artificial intelligence.

Already the power of the new technology is evident to everyone who has managed to use it. Artificial intelligence can recreate the speaking voice of dead persons. It can produce images from instructions. It can fill in the missing passages from damaged texts. It can imitate any and all literary styles. It can convert any given authorial corpus into logarithmic probability. It can create characters that speak in unpredictable but convincing ways. It can write basic newspaper articles. It can compose adequate melodies. But what any of this means, or to what uses these new abilities will ultimately be turned, are as yet unclear.

There is some fascinating creative work emerging from this primordial ooze of nascent natural language processing (NLP). Vauhini Vara's GPT-based requiem for her sister and the poetry of Sasha Stiles are experiments in the avant-garde tradition. (My own NLP work falls into this category as well, including the short story this essay accompanies.)

Then there are attempts to use AI in more popular media. AI Dungeon, an infinitely generated text adventure driven by deep learning, explores the gaming possibilities. Perhaps the most exciting format for NLP is bot-character generation. Project December allows its users to recreate dead people, to have conversations with them. But there's no need for these generated voices to be based on actual human beings. Lucas Rizzotto concocted a childhood imaginary friend, Magnetron, which existed inside his family's microwave, out of OpenAI and a hundred-page backstory.

These early attempts to find spheres of expression for the new technology are dynamic and exciting, but they remain marginal. This work has not yet resonated with the public, nor has it solidified into coherent practice.

The scattered few of us who use this technology feel its eerie power. The encounter with deep learning is simultaneously ultramodern and ancient, manufacturing an unsettling impression of being recognized by a machine, or of having access, through machines, to a vast human pattern, even a collective unconscious or noosphere. But that sensation has not yet been communicated to audiences. They don't participate in it. They see only the results, the words on the page, which are little more than aftereffects.

The literary world tends to engage creative technology with either petulant resistance or slavish adulation. Neither are particularly useful. A novel about social media is still considered surprisingly innovative, and even the smartphone rarely makes an appearance in literary fiction.

Recent novels about artificial intelligence, such as Klara and the Sun by Kazuo Ishiguro or Machines Like Me by Ian McEwan, have absolutely nothing to do with actual artificial intelligence as it currently exists or will exist in the foreseeable future. They are, frankly, embarrassingly lazy on the subject.

Meanwhile, the hacker aesthetic has had its basic fraud exposed: it fantasized technologists as rebel outsiders, poised to make the world a better place, as a cover for monopolists who need excuses to justify their hunger for total impunity.

Both the resistance and the adulation are stupid, and so we find ourselves toxically ill-prepared for the moment we are facing: the intrusion of technology into the creative process. The machines are no longer lurking on the periphery; they are entering the temple, piercing the creative act itself.

The Lumière brothers produced roughly 1,400 minute-length films, or "views" as they were called at the time, but nobody could see what these views would blossom into: A Trip to the Moon, and Birth of a Nation, and Citizen Kane, and Vertigo, and Apocalypse Now. Creative AI is not a new technique. It is an entirely new artistic medium. It needs to be developed as such. The question facing the small band of creators using artificial intelligence today is how we get from The Arrival of a Train at La Ciotat to Citizen Kane.

II. The Direction

One thing is certain: Nobody needs machines to make shitty poetry. Humans make quite enough of that already. The blossoming of AI art into its unique and particular reality will demand a unique and particular practice, one that sheds traditional categories of art as they currently exist and which engages audiences in ways they have never been engaged before.

One potential danger, at least in the short term, is that the technology is advancing so quickly it is unclear whether any artistic practice that emerges from it will have time to mature before it becomes obsolete.

Every example of creative AI I have listed above uses GPT-3 (Generative Pre-trained Transformer 3). But Google just very recently released its own Transformer-based large language model, PaLM, which promises low-level reasoning functions. What does that mean? What can be built from that new function? Art requires technical mastery, and also conscious transcendence of technical mastery. Even keeping up with the latest AI developments, never mind getting access to the tech, is a full-time job. And art that does nothing more than show off the power of a machine isn't doing its job.

Then there is the question of whether anyone wants computer-generated art. One of the somewhat confounding aspects of the internet generally is that it is hugely creative but fundamentally resistant to art, or at least to anything that identifies itself as art. TikTok has turned into a venue of explosive creativity, but there is no Martin Scorsese of TikTok, nor could there ever be. Internet-specific genres, like Vine, are inherently ephemeral and impersonal. They aren't art forms so much as widespread crafting activities, like Victorian-era collages, or Japanese Chigiri-e, or Ukrainian pysanky.

When people want to read consciously made, individually controlled language, they tend to pick up physically printed books, as ridiculous as that sounds. Creators follow the audiences. The top ten novels published this year are not fundamentally different, in their modes of composition, dissemination and consumption, from the novels of the 1950s.

But the resistance creative AI faces, both from artists and from audiences, is a sign of the power and potential of the new medium. The most exciting promise of creative AI is that it runs in complete opposition to the overarching value that defines contemporary art: Identity. The practice itself removes identity from the equation.

Since so few people have used this technology, I'm afraid I'll have to use the short story that accompanies this essay as an example, although, to be clear, many people are using this tech in completely different ways and my own approach is representative of nothing but my own fascinations and capacities.

A few months ago, I received access to the product of a Canadian AI company called Cohere, which allows for sophisticated, nimble manipulations of Natural Language Processing. Through Cohere, I was able to create algorithms derived from various styles. These included Thomas Browne, Eileen Chang, Dickens, Shakespeare, Chekhov, Hemingway and others, including anthologies of love stories and Chinese nature poetry.

I then took those algorithms and had them write sentences and paragraphs for me on selected themes: a marketplace, love at first sight, a life played out after falling in love. The ones I liked I kept. The ones I didn't I threw out. Then I took the passages those algorithms had provided and input them to Sudowrite, the stochastic writing tool. Sudowrite generated texts on the basis of the prompts the other algorithms had generated.
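
In code terms, the workflow described here amounts to a generate-then-curate loop. The sketch below uses placeholder generate() and keep() functions rather than any particular vendor's API, since the exact Cohere and Sudowrite calls are not given in the essay; it only mirrors the shape of the process.

```python
# Sketch of the generate-and-curate loop described above. generate() is a stub
# standing in for a language-model call (Cohere, Sudowrite, etc.); no real API is used.
def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a large language model here.
    return f"[model output for: {prompt}]"

def keep(passage: str) -> bool:
    # Placeholder for the human judgment step: "The ones I liked I kept."
    return len(passage) > 0

style_prompts = [
    "In the style of Chekhov, describe a marketplace in the rain.",
    "In the style of Eileen Chang, describe love at first sight.",
]

curated = [p for p in (generate(s) for s in style_prompts) if keep(p)]

# The curated passages then become prompts for a second generator,
# mirroring how the essay feeds one system's output into Sudowrite.
final_text = generate("\n\n".join(curated))
print(final_text)
```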

To generate "Autotuned Love Story" I had to develop a separate artistic practice around the technology. I'm not proposing my practice as a model; in fact, now that I've done it, I don't see why anyone else would do what I've done. My point is that what I created here, and how I created it, is distinct from traditional artistic creation.

The love story below is my attempt to develop an idealized love story out of all the love stories that I have admired. It exists on the line between art and criticism. "Autotuned Love Story" certainly isn't mine. I built it, but it's not my love story. It's the love story of the machines interacting with all the love stories I have loved. I confess that I find it eerie; there is something true and moving in it that I recognize but which I also can't place.

Creative AI is not an expression of a self. Rather it is the permutation and recombination and reframing of other identities. It is not, nor will it be, nor can it be, a representation of a generation or a race or a time. It is not a voice. Whatever voice is, it is the opposite. The process of using creative AI is literally derivative. The power of creative AI is its strange mixture of human and other. The revelation of the medium will be the exploitation of that fact.

Because creative AI is not self-expression, its development will be different from other media. On that basis, two propositions:

Artists should not use artificial intelligence to make art that people could make otherwise.

The display of technology cannot be the purpose of the art.

Creative AI should, above all, be itself and not something else. And secondly, it should allow users to forget that it's artificial intelligence altogether. Otherwise it will be little more than advertising for the tech, or an alibi for the artist.

Fortunately, there is a predecessor that can serve as a model, and which follows the two directions above: Hip hop. Hip hop was an art form determined, from its inception, by technological innovation. Kool Herc invented the two-turntable setup that allowed the isolation of the break, and Grandmaster Flash developed backspin, punch phrasing, and scratching. These developments required enormous technical facility but also a concentration on effects. The artists shaped the tech in response to audience reactions.

Hip hop also demanded an entirely new musicality to maximize the effects of the innovation. Building beats and sampling required a comprehensive musical knowledge. The best DJs had the widest access to music of all kinds, and were each, in a sense, archivists. They engaged in raids on the past, using history for their own purposes.

Just as hip hop artists developed a consummate familiarity with earlier forms of popular music, the artists of artificial intelligence who use large language models will need to understand the history of the sentence and the development of literary style in all forms and across all genres. Linguistic AI will demand the skills of close reading and a historical breadth as the basic terms of creation.

And when we look at the bad AI art available now, the failings of the art are almost never technical. It's usually a failure to possess deep knowledge, or sometimes any knowledge, of narrative technique or poesis.

In its early years, hip hop had a defiance and a focus on effect which AI art should aspire to. Its pioneers showed a willingness and capacity to create and abandon values. They did not worship their instruments. They concentrated on the results, and that spirit largely survives. A good question to ask as a rough guide to the creative direction of AI art: What would Ye do? WWYD?

III. The Stakes

Creative AI promises more powerful illusions and more all-consuming worlds. Eric Schmidt, at The Atlantic, recently offered an example of the future awaiting us:

"If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled. And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear who are best friends are watching television and the bear says, 'I don't really like this television show.' And the kid says, 'Yeah, I agree with you.'"

Despite this terrifying promise, AI art will probably remain small and marginal in the short term, just as film was for several decades after its birth.

The development of creative AI is much, much more important than how cool the new short stories or interactive games can be. For one thing, artistic practice may serve as a desperately needed bridge between artificial intelligence and the humanities. As it stands, those who understand literature and history don't understand the technology that is about to transform the framework of language, and those who are building the technology that is revolutionizing language don't understand literature or history.

Also, the political uses of artificial intelligence will follow creative practices. That's certainly what happened with film. A few decades after The Arrival of a Train at La Ciotat, Lenin was using film as the primary propaganda method of the Soviet Union, and the proto-fascist Gabriele D'Annunzio filmed his triumphal entrance into the city of Fiume. Whatever forms creative AI takes will, almost immediately, be used to manipulate and control mass audiences.

Creative AI is a confrontation with the fact that an unknown number of aspects of art, so vital to our sense of human freedom, can be reduced to algorithms, to a series of external instructions. Moravec's paradox (that the more complex and high-level a task, the easier it is to compute) is fully at play. Capacities requiring a lifetime of dedication to master, like a personal literary style, can simply be programmed. The basic things remain mysteries. What makes an image powerful? What makes a story compelling? The computers have no answers to these questions.

There is a line thrusting through the world and ourselves dividing what is computable from what is not. It is a line driving straight into the heart of the mystery of humanity. AI art will ride on this line.

_________________________________________________

[This story was generated by means of natural language processing, using Cohere AI and Sudowrite accessing GPT-3.]

The rain in the market smelled like rusting metal and wet stones. The stallholders had no real need to sell nor did they care much for their customers. There was a cookery demonstration. There was a magician. There was a video games stall. There was a beauty parlour. The rain was like a mist at first, fine and barely noticeable, but not long after the streets were flowing with a torrent of mud and water.

Among huddles of people, they met in a stall that sold umbrellas. The eyes of one were large and green, soft and milky. The other's eyes were like iced coffee.

Shyness came upon them at once. Shyness and fear. A butcher's boy, with a beautiful nose, stood beside a post, making grimaces at a plan that was chalked out on the top of it. A ragged little boy, barefooted, and with his face smeared with blood, from having just grazed his nose against the corner of a post, began playing at marbles with other boys of his own size. Their smiles were interminable, wavering and forgetful, and it seemed as though they could not control their lips, that they smiled against their will while they thought of something else.

Alone?

Yes.

The rain became like a dirty great mop being wrung out above their heads. The market became more uneasy, and gave place to a sea of noises that on both sides added to the general clamour. The crowd began to press in on them, to snatch at their coats, to groan, to criticize and to complain of cold and hunger, of want of clean clothes, of lack of decent shelter. The rain was unremitting, just like the flow of people, the flow of traffic, the flow of tired animals. The crowd erupted and all at once it seemed that there were too many people.

When the crowd closed up again, the two were separated from one another. The rain died down and the market was now very different. They looked for each other like lost children in a train station. It was a different kind of a market, darker, older, dingier, more chaotic. The pavement was covered with mud and mire and straw and dung.

They met by accident, which is only a way of saying that we have not looked for something before it comes forward, that they were both in the world and the world is small.

*

They never met again, or maybe they did.

Maybe, at first, they had the same delight in touching, in meeting, in forming, in blurring, in drawing out. They had secrets, and they shared those secrets. As one's hands rolled over the other, they lay as still as fish. It seemed to both of them that they could not live in the old way; they could not go on living as though there were nothing new in their lives. They had to settle down together somewhere, to live for themselves, alone, to have their own home, where they would be their own masters. They went abroad, changed their lives. One was a manager of a railway branch line. The other became a teacher in a school. And the large study in which they spent their evenings was so full of pictures and flowers that it was difficult to move about without upsetting something. Pictures of all sorts, landscapes in water-colour, engravings after the old masters, and the albums filled with the photographs of relatives, friends, and children, were scattered everywhere about the bookcases, on the tables, on the chairs. Love is like money: the kind you have and do not want to lose, the kind you lose and treasure. The thought of death, which had moved them so profoundly, no longer caused in either the former fear and remorse, a sound that lost its echo in the endless, sad retreat, a phantom of caresses down hallways empty and forsaken.

Maybe they lived that life. Maybe they didn't. But in the market, among the detritus, the splintered edges, they had once found each other, and found each other and lost each other again. They had said only that, yes, they were alone.

The rain had smelled like sodden horses and rusting metal and wet stones.

Read more:
Does Artificial Intelligence Really Have the Potential to Create Transformative Art? - Literary Hub

How artificial intelligence is boosting crop yield to feed the world – Freethink

Over the last several decades, genetic research has seen incredible advances in gene sequencing technologies. In 2004, scientists completed the Human Genome Project, an ambitious project to sequence the human genome, which cost $3 billion and took 10 years. Now, a person can get their genome sequenced for less than $1,000 and within about 24 hours.

Scientists capitalized on these advances by sequencing everything from the elusive giant squid to the Ethiopian eggplant. With this technology came promises of miraculous breakthroughs: all diseases would be cured and world hunger would be a thing of the past.

So, where are these miracles?

We need about 60 to 70% more food production by 2050.

In 2015, a group of researchers founded Yield10 Bioscience, an agriculture biotech company that aimed to use artificial intelligence to start making those promises into reality.

Two things drove the development of Yield10 Bioscience.

"One, obviously, [the need for] global food security: we need about 60 to 70% more food production by 2050," explained Dr. Oliver Peoples, CEO of Yield10 Bioscience, in an interview with Freethink. "And then, of course, CRISPR."

It turns out that having the tools to sequence DNA is only step one of manufacturing the miracles we were promised.

The second step is figuring out what a sequence of DNA actually does. In other words, it's one thing to discover a gene, and it is another thing entirely to discover a gene's role in a specific organism.

In order to do this, scientists manipulate the gene: delete it from an organism and see what functions are lost, or add it to an organism and see what is gained. During the early genetics revolution, although scientists had tools to easily and accurately sequence DNA, their tools to manipulate DNA were labor-intensive and cumbersome.

It's one thing to discover a gene, and it is another thing entirely to discover a gene's role in a specific organism.

Around 2012, CRISPR technology burst onto the scene, and it changed everything. Scientists had been investigating CRISPR, a system that evolved in bacteria to fight off viruses, since the 1980s, but it took 30 years for them to finally understand how they could use it to edit genes in any organism.
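
To make the "edit genes in any organism" step slightly more concrete: the commonly used Cas9 enzyme cuts next to an "NGG" PAM motif, so one very simplified first step in planning an edit is just scanning a sequence for those motifs. The sketch below uses a made-up toy sequence and is not drawn from the article.

```python
# Very simplified illustration: find candidate SpCas9 target sites, i.e. 20-nt
# protospacers immediately followed by an "NGG" PAM, in a toy DNA sequence.
import re

sequence = "ATGCGTACCGGTTAGCTAGCTAGGCTTACGGATCCGGAATTCGGCGGTAGCTAGCTAACGG"  # made up

for match in re.finditer(r"(?=([ACGT]{20}[ACGT]GG))", sequence):
    site = match.group(1)
    print(f"protospacer {site[:20]}  PAM {site[20:]}  at position {match.start()}")
```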

Suddenly, scientists had a powerful tool that could easily manipulate genomes. Equipped with DNA sequencing and editing tools, scientists could complete studies that once took years or even decades in mere months.

Promises of miracles poured back in, with renewed vigor: CRISPR would eliminate genetic disorders and feed the world! But of course, there is yet another step: figuring out which genes to edit.

Over the last couple of decades, researchers have compiled databases of millions of genes. For example, GenBank, the National Institutes of Health's (NIH) genetic sequence database, contains 38,086,233 genes, of which only tens of thousands have some functional information.

For example, ARGOS is a gene involved in plant growth. Consequently, it is a very well-studied gene. Scientists found that genetically engineering Arabidopsis, a fast-growing plant commonly used to study plant biology, to express lots of ARGOS made the plant grow faster.

Dozens of other plants have ARGOS (or at least genes very similar to it), such as pineapple, radish, and winter squash. Those plants, however, are hard to genetically manipulate compared to Arabidopsis. Thus, ARGOS's function in crops in general hasn't been as well studied.
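
For readers who want to poke at these public databases directly, here is a small sketch using Biopython's Entrez interface to search NCBI for ARGOS-related records; the query string is illustrative, and NCBI's usage policy requires supplying your own email address.

```python
# Sketch: querying NCBI's databases for ARGOS-related records via Biopython.
from Bio import Entrez

Entrez.email = "you@example.com"   # required by NCBI; replace with your own

handle = Entrez.esearch(db="gene", term="ARGOS AND Arabidopsis thaliana[Organism]")
result = Entrez.read(handle)
handle.close()

print(result["Count"], "matching gene records")
print(result["IdList"][:5])        # a few gene IDs to inspect further
```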

The big crop companies are struggling to figure out what to do with CRISPR.

CRISPR suddenly changed the landscape for small groups of researchers hoping to innovate in agriculture. It was an affordable technology that anyone could use, but no one knew what to do with it. Even the largest research corporations in the world don't have the resources to test all the genes that have been identified.

"I think if you talk to all the big crop companies, they've all got big investments in CRISPR. And I think they're all struggling with the same question, which is, 'This is a great tool. What do I do with it?'" said Dr. Peoples.

The algorithm can identify genes that act at a fundamental level in crop metabolism.

The holy grail of crop science, according to Dr. Peoples, would be a tool that could identify three or four genetic changes that would double crop production for whatever you're growing.

With CRISPR, those changes could be made right now. However, there needs to be a way to identify those changes, and that information is buried in the massive databases.

To develop the tool that can dig them out, Dr. Peoples team merged artificial intelligence with synthetic biology, a field of science that involves redesigning organisms to have useful new abilities, such as increasing crop yield or bioplastic production.

This union created Gene Ranking Artificial Intelligence Network (GRAIN), an algorithm that evaluates scientific databases like GenBank and identifies genes that act at a fundamental level in crop metabolism.

That fundamental level aspect is one of the keys to GRAINs long-term success. It identifies genes that are common across multiple crop types, so when a powerful gene is identified, it can be used across multiple crop types.
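
The article does not describe GRAIN's internals, but the "common across multiple crop types" criterion can be illustrated with a deliberately simplified ranking over a made-up gene-to-crop mapping; this is not the actual GRAIN algorithm.

```python
# Deliberately simplified illustration of ranking genes by how many crop types
# share them (NOT the actual GRAIN algorithm; the data below is invented).
gene_to_crops = {
    "GENE_A": {"camelina", "canola", "soybean", "wheat"},
    "GENE_B": {"camelina", "canola"},
    "GENE_C": {"wheat"},
}

ranked = sorted(gene_to_crops.items(), key=lambda kv: len(kv[1]), reverse=True)
for gene, crops in ranked:
    print(f"{gene}: present in {len(crops)} crop types -> {sorted(crops)}")
```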

For example, using the GRAIN platform, Dr. Peoples and his team identified four genes that may significantly impact seed oil content in Camelina, a plant similar to rapeseed (true canola oil). When the researchers increased the activity of just one of those genes via CRISPR, the plants had a 10% increase in seed oil content.

It's not quite a miracle yet, but with more advances in gene editing and AI happening all the time, the promises of the genetic revolution are finally starting to pay off.

We'd love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.

View post:
How artificial intelligence is boosting crop yield to feed the world - Freethink