Media Search:



Is quantum computing about to change the world? – BroadbandDeals

Quantum computing's potential extends beyond simply processing things faster, offering scope to create entirely new consumer services and product offerings

Neil Cumins Thursday, 17 June, 2021

It's common for new technologies to be treated with a healthy degree of scepticism when they're first unveiled.

From the internet to social media, it often takes a while for potential to become reality.

Today, there's excitable talk about the blockchain's potential, or how light-powered LiFi may supplant WiFi in the nation's homes. Talk, but not much action as yet.

Quantum computing's potential to transform our world may be unmatched, surpassing even the Internet of Things or fully automated robotics.

And while you don't need a degree in quantum physics to understand quantum computing, it's important to appreciate the basics of this highly complex (and unstable) technology.

Regardless of what they're being asked to do, electronic devices only understand binary inputs. Zero or one, on or off. That's it.

Every FIFA tournament, CAD package, Netflix marathon and email is composed of immense strings of zeroes and ones: the binary data bits computers can process and interpret.

Quantum computing potentially subverts this by allowing bits to be both zeroes and ones at the same time.

This status fluidity involves holding data in what's called a superposition state: a coin spinning on its side rather than landing heads-up or tails-up.

Superpositions grant a single bit far more potential, offering exponentially more processing power than a modern (classical) computer can deliver.
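For readers who like to tinker, here is a minimal sketch in Python (with numpy) of the spinning-coin idea: a qubit is modelled as two complex amplitudes, and measuring it collapses the superposition to zero or one with the corresponding probabilities. This is a toy simulation for illustration, not how real quantum hardware is programmed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A classical bit is 0 or 1. A qubit's state is a vector of two
# complex amplitudes (a, b) with |a|^2 + |b|^2 = 1; measuring it
# yields 0 with probability |a|^2 and 1 with probability |b|^2.

# Equal superposition: the "spinning coin" from the analogy above.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

def measure(state, shots=1000):
    """Sample measurement outcomes from a single-qubit state."""
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], size=shots, p=probs)

outcomes = measure(state)
print(f"0s: {np.sum(outcomes == 0)}, 1s: {np.sum(outcomes == 1)}")
# Roughly 500/500 -- until measured, the qubit holds both values.
```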

Quantum computers are theoretically capable of achieving feats today's hardware couldn't manage in a hundred lifetimes.

Google claims to own a quantum computer that can perform certain tasks 100,000,000 times faster than its most powerful classical computer.

Indeed, computer scientists have already demonstrated that quantum processing can encrypt data in such a way that it becomes effectively impossible to hack.

This alone could transform online security, rendering spyware and most modern malware redundant, while ensuring a far safer world for consumers and businesses.

Quantum computing may be able to process the vast repositories of digital information being generated by billions of AI devices, which would otherwise result in huge data siloes.

It could unlock the secrets of our universe, helping us to achieve nuclear fusion or test drugs in ways we'd never be able to accomplish with classical computing and brainpower alone.

Unfortunately, there are certain obstacles in the way of achieving full quantum computing potential.

The inherent instability of superpositions requires processors to be kept at cryogenic temperatures, as close to absolute zero (-273.15°C) as possible.

Devices need to be stored and handled with exceptional care, which in turn makes them incredibly expensive and unsuitable for domestic deployment.

And while the ability to develop uncrackable encryption algorithms is appealing, a quantum processor could also unlock almost any existing encryption method.

The havoc that could wreak in the wrong hands doesn't bear thinking about, and scientists are struggling to develop quantum-resistant algorithms for classical computers.

Like all emerging technologies, quantum computing has some way to go before it achieves mainstream adoption and acceptance.

When it does, the world will be a very different place.

Follow this link:
Is quantum computing about to change the world? - BroadbandDeals

Can artificial intelligence predict how sick you’ll get from COVID-19? UC San Diego scientists think so – The San Diego Union-Tribune

A team of San Diego scientists is harnessing artificial intelligence to understand why COVID-19 symptoms can vary dramatically from one person to the next, information that could prove useful in the continued fight against the coronavirus and future pandemics.

Researchers pored through publicly available data to see how other viruses alter which genes our cells turn on or off. Using that information, they found a set of genes activated across a wide range of infections, including the novel coronavirus. Those genes predicted whether someone would have a mild or a severe case of COVID-19, and whether they were likely to have a lengthy hospital stay.

A UC San Diego-led team joined by researchers at Scripps Research and the La Jolla Institute for Immunology published the findings June 11. The study's authors say their approach could help determine whether new treatments and vaccines are working.

"When the whole world faced this pandemic, it took several months for people to scramble to understand the new virus," said Dr. Pradipta Ghosh, a UCSD cell biologist and one of the study's authors. "I think we need more of this computational framework to guide us in panic states like this."

The project began in March 2020, when Ghosh teamed up with UCSD computer scientist Debashis Sahoo to better understand why the novel coronavirus was causing little to no symptoms in some people while wreaking havoc on others.

There was just one problem: The novel coronavirus was, well, novel, meaning there wasn't much data to learn from.

So Sahoo and Ghosh took a different tack. They went to public databases and downloaded 45,000 samples from a wide array of viral infections, including Ebola, Zika, influenza, HIV, and hepatitis C virus, among others.

Their hope was to find a shared response pattern to these viruses, and that's exactly what they saw: 166 genes that were consistently cranked up during infection. Among that list, 20 genes generally separated patients with mild symptoms from those who became severely ill.
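To illustrate the general idea (this is a hypothetical sketch, not the study's actual method), a signature-based severity call can be as simple as averaging the expression of the panel genes and applying a cutoff. The gene names, expression values, and threshold below are invented.

```python
import numpy as np

# Hypothetical sketch of a gene-signature severity score: average the
# expression of signature genes cranked up during infection, then
# threshold to separate mild from severe. The gene list, values, and
# cutoff are illustrative stand-ins for the study's 20-gene panel.
SEVERITY_GENES = ["GENE_A", "GENE_B", "GENE_C"]

def severity_score(expression: dict) -> float:
    """Mean expression across the signature genes."""
    return float(np.mean([expression[g] for g in SEVERITY_GENES]))

def predict_severity(expression: dict, cutoff: float = 2.0) -> str:
    return "severe" if severity_score(expression) >= cutoff else "mild"

patient = {"GENE_A": 3.1, "GENE_B": 2.4, "GENE_C": 2.8}  # toy values
print(predict_severity(patient))  # -> "severe"
```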

The coronavirus was no exception. Sahoo and Ghosh say they identified this common viral response pattern well before testing it in samples from COVID-19 patients and infected cells, yet the results held up surprisingly well.

"It seemed to work in every data set we used," Sahoo said. "It was hard to believe."

They say their findings show that respiratory failure in COVID-19 patients is the result of overwhelming inflammation that damages the airways and, over time, makes immune cells less effective.

Stanford's Purvesh Khatri isn't surprised. His lab routinely uses computer algorithms and statistics to find patterns in large sets of immune response data. In 2015, Khatri's group found that respiratory viruses trigger a common response. And in April, they reported that this shared response applied to a range of other viruses, too, including the novel coronavirus.

That makes sense, Khatri says, because researchers have long known there are certain genes the immune system turns on in response to virtually any viral infection.

"Overall, the idea is pretty solid," said Khatri of the recent UCSD-led study. "The genes are all (the) usual suspects."

Sahoo and Ghosh continue to test their findings in new coronavirus data as it becomes available. They're particularly interested in COVID-19 long-haulers. Ghosh says they're already seeing that people with prolonged coronavirus symptoms have distinct gene activation patterns compared to those who've fully recovered. Think of it like a smoldering fire that won't die out.

The researchers' ultimate hope isn't just to predict and understand severe disease, but to stop it. For example, they say, a doctor could give a patient a different therapy if a blood sample suggests they're likely to get sicker with their current treatment. Ghosh adds that the gene pattern they're seeing could help identify promising new treatments and vaccines against future pandemics based on which therapies prevent responses linked to severe disease.

"In unknown, uncharted territory, this provides guard rails for us to start looking around, understand (the virus), find solutions, build better models and, finally, find therapeutics."

See original here:
Can artificial intelligence predict how sick you'll get from COVID-19? UC San Diego scientists think so - The San Diego Union-Tribune

Artificial Intelligence Revolutionizes Waste Collection – University of San Diego Website

Thursday, June 17, 2021

Using artificial intelligence, a team of computer science students is setting out to revolutionize the waste collection and recycling industry.

For Mohammed Aljaroudi, Khaled Aloumi, Tatiana Barbone and Faisal Binateeq, their work with Top Mobile Vision has been an opportunity to redefine what waste collection looks like. With cameras mounted on vehicles, the team created a website to track service, helping to understand system efficiencies and opportunities for change in the industry.

"For this project we are taking the footage from these cameras and translating it into useful data for the customers of Top Mobile Vision," says Binateeq, a 2021 computer science graduate from the University of San Diego Shiley-Marcos School of Engineering. "With the footage, we can see when the bin is lifted and we can translate that [into data] using technology of machine learning and QR codes to identify the bins."

Through an interactive website, data is collected and updated continuously, enabling clients to evaluate collection processes and modify service as they go.
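As an illustration of the bin-identification step Binateeq describes, the sketch below uses OpenCV's built-in QR detector to scan truck footage and log the first sighting of each bin's code. The video path, the logging format, and the assumption that every bin carries a unique QR code are ours, not details of Top Mobile Vision's system.

```python
import cv2

def log_bin_lifts(video_path: str):
    """Scan footage for QR codes and log each bin's first sighting."""
    detector = cv2.QRCodeDetector()
    capture = cv2.VideoCapture(video_path)
    seen = set()
    while True:
        ok, frame = capture.read()
        if not ok:  # end of footage
            break
        data, points, _ = detector.detectAndDecode(frame)
        if data and data not in seen:  # first sighting of this bin
            seen.add(data)
            timestamp = capture.get(cv2.CAP_PROP_POS_MSEC) / 1000
            print(f"bin {data} serviced at {timestamp:.1f}s")
    capture.release()

log_bin_lifts("truck_camera_footage.mp4")  # hypothetical file name
```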

For Binateeq, the opportunity to work on this long-term project with a team of dedicated colleagues has been a unique experience, a collaboration he is looking forward to continuing into the future.

Allyson Meyer '16 (BA), '21 (MBA)

Read more here:
Artificial Intelligence Revolutionizes Waste Collection - University of San Diego Website

Spotlight on AI: Latest Developments in the Field of Artificial Intelligence – Analytics Insight

What's new in the world of artificial intelligence?

Artificial intelligence is changing the course of our lives with its constant developments. Before the pandemic and now in the new normal, AI remains a key trend in the tech industry. It is reaching wider audiences as the years pass, and the scientists, engineers, and entrepreneurs who involve themselves with modern technologies are reaping the benefits of AI and related fields such as IoT and machine learning.

Organizations that overlooked digital transformation and the power of artificial intelligence are now picking up the pace of AI adoption. When COVID-19 was creating chaos across industries, it became evident that disruptive technologies, and the automation that comes with them, are more than crucial.

While 2020 was a great year for artificial intelligence working at its true potential, here are the latest advancements in the field of AI that promise exciting times for the future of this technology.

Researchers from the University of Gothenburg have developed an artificial intelligence model to predict which viruses could spread from animals to humans. The algorithm studies the role of carbohydrates to understand the infection path. In scientific terms, carbohydrates are called glycans, and they play a significant role in the way our bodies function. Almost all viruses first interact with the glycans in our bodies, as the coronavirus did. Led by Daniel Bojar, assistant professor at the University of Gothenburg, the team built a model that can analyze glycans with improved accuracy, predicting new virus-to-glycan interactions to better understand zoonotic diseases.

The world is evolving with disruptive technologies, and that includes hackers and cyber attackers. Cyberattacks have become more common amidst the remote working culture, where sensitive files and documents have become prime targets. To deal with this pressing concern, V.S. Subrahmanian, a cybersecurity researcher at Dartmouth College, created an algorithm called the Word Embedding-based Fake Online Repository Generation Engine (WE-FORGE) that generates fake versions of patents under development, making it difficult for hackers to find what they are looking for. The system generates convincing fakes based on the keywords of a given document: for each keyword it identifies, it analyses a list of related topics and replaces the original keyword with a randomly chosen related word.
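A toy version of that substitution step might look like the sketch below. WE-FORGE derives its related terms from the documents themselves using word embeddings; the small hand-built table here merely stands in for that machinery.

```python
import random

rng = random.Random(42)

# Toy sketch of the decoy-document idea described above: for each
# keyword found in a document, substitute a randomly chosen related
# term. The hand-built table is an illustrative stand-in for the
# embedding-derived related terms the real system uses.
RELATED = {
    "lithium": ["sodium", "cobalt", "nickel"],
    "anode": ["cathode", "electrolyte", "separator"],
    "polymer": ["ceramic", "composite", "alloy"],
}

def forge(document: str) -> str:
    """Return a decoy version of the document with keywords swapped."""
    return " ".join(
        rng.choice(RELATED[word.lower()]) if word.lower() in RELATED else word
        for word in document.split()
    )

original = "A lithium anode coated with a thin polymer film"
print(forge(original))  # e.g. "A nickel cathode coated with a thin alloy film"
```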

DataRobot announced its second major platform launch, DataRobot version 7.1, with new MLOps management agents, time series model enhancements, and automated AI reports. With an aim to provide lifecycle management for remote AI and machine learning models, DataRobot's new launch will offer feature discovery push-down integration for Snowflake and time series Eureqa model improvements. Through this, Snowflake users can use automatic discovery and computation of individual independent variables in the Snowflake data cloud. Apart from these additions, DataRobot also provides a no-code app builder that can convert deployed models into AI apps without coding.

Exscientia's US$60M acquisition of Allcyte will boost AI drug discovery. Allcyte is an Austrian company developing an artificial intelligence platform to study how cancer treatments work on different individuals. After the acquisition, this technology will work with Exscientia's native software, which uses AI to identify potential drug targets, design the right drugs, and send them for trials. Exscientia will now be able to take a precision medicine approach to designing drug molecules, ensuring improved efficiency.

Redi2 Technologies, creator of the SaaS delivery model for financial services billing solutions, has announced a collaboration with IBM Private Cloud Services to improve flexibility. The combination of these technologies will add strong value for Redi2 Revenue Manager clients. Top asset managers throughout the world can take advantage of improvements such as faster responses for clients who need quick reactions to changes, the ability to move data from one country to another, and room to expand their infrastructure.


Continued here:
Spotlight on AI: Latest Developments in the Field of Artificial Intelligence - Analytics Insight

Evolution, rewards, and artificial intelligence – TechTalks

This article is part of "the philosophy of artificial intelligence," a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Last week, I wrote an analysis of "Reward Is Enough," a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that, with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence: the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I'll try to disambiguate in simple terms where the line between theory and practice stands.

In their paper, the DeepMind scientists present the following hypothesis: "Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment."

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I'm not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don't are eliminated.

According to Dawkins, "In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple; that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature."

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact thing. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism's survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn't, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.
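The mutation-and-selection loop described above maps naturally onto a few lines of code. The sketch below is a toy illustration: the "genome" is a list of numbers, and the invented fitness target stands in for environmental pressure.

```python
import random

rng = random.Random(0)

# Minimal mutation-plus-selection loop: fitness rewards genomes close
# to a target, each generation keeps the fittest, and survivors are
# copied with small random mutations. TARGET is an arbitrary choice.
TARGET = [0.8, 0.2, 0.5, 0.9]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + rng.gauss(0, rate) for g in genome]

population = [[rng.random() for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # nonrandom "death" selects phenotypes
    population = survivors + [mutate(rng.choice(survivors))
                              for _ in range(40)]

print([round(g, 2) for g in max(population, key=fitness)])
# Converges toward TARGET -- selection accumulates useful mutations.
```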

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind's scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

In their paper, DeepMind's scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
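As a concrete (and deliberately tiny) example of that loop, here is tabular Q-learning on a toy corridor environment. The environment, reward, and hyperparameters are invented for illustration and are not drawn from the paper.

```python
import random

rng = random.Random(0)

# Tabular Q-learning on a 1-D corridor: the agent starts at cell 0
# and earns a reward only upon reaching the goal cell at the far end.
N_STATES, GOAL, ACTIONS = 6, 5, [-1, +1]   # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly exploit the best known action, sometimes explore.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(state, action)] += alpha * (
            reward + gamma * max(Q[(nxt, a)] for a in ACTIONS) - Q[(state, action)]
        )
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
# The learned policy moves right (+1) in every cell, toward the goal.
```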

According to the DeepMind scientists, "A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent's behaviour."

In an online debate in December, computer scientist Richard Sutton, one of the paper's co-authors, said, "Reinforcement learning is the first computational theory of intelligence... In reinforcement learning, the goal is to maximize an arbitrary reward signal."

DeepMind has a lot of experience to prove this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress in some of the most complex problems of science.

The scientists further wrote in their paper, "According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment" [emphasis mine].

This is where the hypothesis separates from practice. The keyword here is "complex." The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can't offer theoretical guarantees on the sample efficiency of reinforcement learning agents.)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don't have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let's say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the state of Earth at the time, and we still don't have a definite theory on that.

An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you'll need to run the simulation. On the other hand, the further you move forward, the more complex your initial state will be. And evolution has created all sorts of intelligent and non-intelligent lifeforms; making sure we could reproduce the exact steps that led to human intelligence, without any guidance and only through reward, is a hard bet.

Many will say that you don't need an exact simulation of the world and only need to approximate the problem space in which your reinforcement learning agent wants to operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: "In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal."

This statement is true, but it downplays the complexities of the environment. Kitchens were created by humans. The shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that wants to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then there would be incongruities between the robot and the humans who will be using the kitchens. Many scenarios that would be easy for a human to handle (walking over an overturned chair) would become prohibitive for the robot.

Other skills, such as language, would require the robot to share even more infrastructure with the humans in its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutors' mental states. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of cleanliness as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for cleanliness would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward alone is enough for any kind of intelligence. But in practice, there's a tradeoff between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is a hybrid approach that combines learning with complex engineering of rewards and AI agent architectures.

Read the original post:
Evolution, rewards, and artificial intelligence - TechTalks