Archive for the ‘Artificial Intelligence’ Category

The Future of Artificial Intelligence: Edge Intelligence – Analytics Insight

With advances in deep learning, recent years have seen enormous growth in artificial intelligence (AI) applications and services, spanning personal assistants, recommendation systems, and video/audio surveillance. More recently, with the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices have connected to the Internet, generating vast volumes of data at the network edge.

Driven by this trend, there is a pressing need to push the AI frontier to the network edge in order to fully unlock the potential of edge big data. Edge computing, an emerging paradigm that moves computing tasks and services from the network core to the network edge, is widely regarded as a promising solution. The resulting interdiscipline, edge AI or edge intelligence (EI), is beginning to attract enormous interest.

Research on EI is still in its early stages, however, and a dedicated venue for exchanging recent advances in EI is highly desired by both the computer systems and AI communities. The spread of EI does not mean, of course, that there will be no future for centralized cloud intelligence (CI). The orchestrated use of edge and cloud virtual resources is, in fact, required to create a continuum of intelligent capabilities and functions across all cloudified infrastructures. This is one of the major challenges for a successful, future-proof deployment of 5G.

Given expanding markets and growing service and application demands on computational data and power, several factors and advantages are driving the development of edge computing. In light of the shifting need for dependable, adaptable, and contextual data, much processing is moving locally onto devices, bringing improved performance and response times (under a few milliseconds), lower latency, higher power efficiency, improved privacy and security since data is held on the device, and cost savings as data-center transfers are minimized.

One of the greatest advantages of edge computing is the ability to deliver real-time results for time-sensitive needs. In many cases, sensor data can be gathered, analyzed, and acted on immediately, without sending it to a distant cloud data center. Scalability across edge devices to speed local decision-making is fundamental. The ability to provide immediate, dependable information builds confidence, increases customer engagement, and, in many cases, saves lives. Consider all of the domains, such as home security, aviation, automotive, smart cities, and health care, in which immediate insight into diagnostics and equipment performance is critical.

Indeed, recent advances in AI may have a far-reaching effect across subfields of networking. Traffic prediction and classification, for example, are two of the most studied applications of AI in the networking field. Deep learning is also offering promising solutions for efficient resource management and network adaptation, already improving network system performance today in areas such as traffic scheduling, routing, and TCP congestion control, and EI could bring further performance advantages to these same problems.

On the other hand, it is challenging today to build a real-time framework that carries heavy computational loads and big data. This is where edge computing (EC) enters the scene. An orchestrated execution of AI methods on computing resources in the cloud as well as at the edge, where most data is produced, will help in this direction. In addition, gathering and filtering large volumes of data containing both network profiles and performance measurements remains crucial, and it becomes far more costly once the need for data labeling is considered. Even these bottlenecks could be addressed by fostering EI ecosystems capable of building win-win collaborations between network/service providers, OTTs, technology providers, integrators, and users.

A further dimension is that network-embedded pervasive intelligence (cloud computing integrated with edge intelligence in network nodes and ever-smarter terminals) could also pave the way to exploiting the achievements of emerging distributed ledger technologies and platforms.

Edge computing provides an alternative to the long-distance transfer of data between connected devices and remote cloud servers. With a database management system (DBMS) on the edge devices, organizations can achieve immediate insight and control, and DBMS performance no longer depends on latency, data rate, or bandwidth. It also reduces threats through a comprehensive security approach: edge computing provides an environment to manage the full cybersecurity effort across the intelligent edge and the intelligent cloud, and unified management systems can deliver intelligent threat protection.

Edge computing also supports compliance with regulations such as the General Data Protection Regulation (GDPR) that govern the use of private data. Companies that don't comply risk significant fines. Edge computing offers various controls that can help companies protect private data and achieve GDPR compliance.

Innovative organizations such as Amazon, Google, Apple, BMW, Volkswagen, Tesla, Airbus, Fraunhofer, Vodafone, Deutsche Telekom, Ericsson, and Harting are now embracing and placing their bets on AI at the edge. Some of these organizations are forming trade associations, such as the European Edge Computing Consortium (EECC), to help educate and persuade small, medium-sized, and large enterprises to drive the adoption of edge computing within manufacturing and other industrial markets.

Read the rest here:
The Future of Artificial Intelligence: Edge Intelligence - Analytics Insight

Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions – ArchDaily

Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions



AI in the architecture industry has provoked much debate in recent years, yet it seems very few of us know exactly what it is or why it has created this storm of emotions. There are professionals researching AI who know more about the field than I do, but I have hands-on experience using AI and algorithms in my design work over the past 10 years through various projects. This is one of the challenges that our field faces: how can we make practical use of these new tools?

Many people have reached out to me claiming that AI could not do their job and that being an architect is so much more than just composing a plan drawing or calculating the volume of a building envelope. They are right. But having said that, there is no reason not to be open to the possibility that AI can help us design even better buildings. There are a lot of tasks that are much better solved with computation than manually, and vice versa. In general, if we are able to reduce a problem to numbers or clearly define what we are trying to solve, AI will probably be able to solve it. If we are looking for subjective opinions or emotions, it might be trickier for an AI to help. Or, to be more precise, it might be trickier for us to provide the AI with the right tools to subjectively analyze our designs.

When we talk about AI within the field of architecture, it often boils down to optimization: where can we find more sellable square meters, or how can we get more daylight into dark apartments? A bigger building and more windows might be the answer, but what other parameters might be affected by this?

Where many parameters are at stake that need to be weighed against each other, AI can help us a lot. Millions of scenarios can be evaluated, and the best selected, in the time it takes us to ride the subway to work. Our AI will present us with the ultimate suggestion based on the parameters we provided.

What if we forgot something? As soon as we start to optimize, we have to consider that the result will be no better than the parameters, training sets, and preferences we provided the AI with for solving the task. If we were to ask a thousand different people "Who's the better architect, Zaha Hadid or Le Corbusier?" we would probably get an even split of answers motivated by a thousand different reasons, since the question is highly subjective. In this case there is no right or wrong, but if we asked who had designed the higher number of buildings, we could get a correct answer. Even if the answer from your AI is the correct one and mathematically optimal, you must consider whether the question itself was right.


Another important part of optimization is the question of how to weigh different features against each other. Is Gross Floor Area (GFA) more important than daylight, and if so, how much more? This is a decision that the architect, the designer of the algorithm, or the client needs to make. Humans have opinions, a specific taste, a preferred style, and so on. AI does not.

Optimizing for maximum gross floor area in parallel with a daylight analysis will give you a certain result, but it might not be the same thing as designing a great building. Yet on the flip side, not being able to meet the client's expectations for GFA, or not being able to make an apartment inhabitable due to lack of light, might result in no building at all.
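The trade-off described above can be sketched as a simple weighted score. This is a hypothetical illustration, not any real design tool: the candidate designs, the normalization targets, and the weights are all invented for the example, and choosing those weights is exactly the human decision the text describes.

```python
# Hypothetical sketch: weighing GFA against daylight when scoring candidate
# building designs. All numbers (candidates, targets, weights) are invented.

def score(design, w_gfa=0.6, w_daylight=0.4):
    """Weighted score; the targets used for normalization are assumptions."""
    gfa_norm = design["gfa_m2"] / 10_000          # assume 10,000 m2 is the target GFA
    daylight_norm = design["daylight_factor"] / 5.0  # assume a 5% daylight factor is ideal
    return w_gfa * gfa_norm + w_daylight * daylight_norm

candidates = [
    {"name": "A", "gfa_m2": 9_500, "daylight_factor": 2.1},   # big but dark
    {"name": "B", "gfa_m2": 8_000, "daylight_factor": 4.5},   # smaller, bright
    {"name": "C", "gfa_m2": 10_000, "daylight_factor": 1.0},  # maximal GFA, very dark
]

best = max(candidates, key=score)
print(best["name"])  # with these weights, the balanced design B wins
```

Shift the weights toward GFA and the "winner" changes, which is the point: the optimum is only as good as the preferences fed in.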

AI presents many new opportunities for our profession, and I believe that the architect is harder to replace with AI than many other professionals, due to our job's subjective nature. The decisions we make to create great buildings often depend on opinions, and as a result there is no right or wrong. But I also believe there are a lot of things we can improve on. We do not have to go as far as using AI: in many cases, we would benefit a lot from simple automation. There are many manual tasks performed by architects at the moment that have to be done to realize a project but do not add any value to the final product. If AI or automation can help us with these tasks, we can spend more time doing what we do best, which is designing great architecture, adding value for a project's inhabitants and our cities more widely.

View post:
Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions - ArchDaily

Reducing the carbon footprint of artificial intelligence – MIT News

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues.

Last June, researchers at the University of Massachusetts at Amherst released a startling report estimating that the amount of power required for training and searching a certain neural network architecture produces emissions of roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car, including its manufacturing.

This issue gets even more severe in the model deployment phase, where deep neural networks need to be deployed on diverse hardware platforms, each with different properties and computational resources.

MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the computational efficiency of the system in some key ways, it can cut the carbon emissions involved, in some cases down to low triple digits of pounds.

The researchers' system, which they call a "once-for-all" network, trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. This dramatically reduces the energy usually required to train each specialized neural network for new platforms, which can include billions of internet of things (IoT) devices. Using the system to train a computer-vision model, they estimated that the process required roughly 1/1,300 the carbon emissions of today's state-of-the-art neural architecture search approaches, while reducing the inference time by 1.5 to 2.6 times.

"The aim is smaller, greener neural networks," says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. "Searching for efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods."

The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second. The paper is being presented next week at the International Conference on Learning Representations. Joining Han on the paper are four undergraduate and graduate students from EECS, MIT-IBM Watson AI Lab, and Shanghai Jiao Tong University.

Creating a once-for-all network

The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. Neural networks automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there's still a training efficiency issue: each model has to be selected and then trained from scratch for its platform architecture.

"How do we train all those networks efficiently for such a broad spectrum of devices, from a $10 IoT device to a $600 smartphone? Given the diversity of IoT devices, the computation cost of neural architecture search will explode," Han says.

The researchers invented an AutoML system that trains only a single, large "once-for-all" (OFA) network that serves as a mother network, nesting an extremely high number of subnetworks that are sparsely activated from the mother network. OFA shares all its learned weights with all subnetworks, meaning they come essentially pretrained. Thus, each subnetwork can operate independently at inference time without retraining.

The team trained an OFA convolutional neural network (CNN), commonly used for image-processing tasks, with versatile architectural configurations, including different numbers of layers and neurons, diverse filter sizes, and diverse input image resolutions. Given a specific platform, the system uses the OFA as the search space to find the best subnetwork based on the accuracy and latency tradeoffs that correlate to the platform's power and speed limits. For an IoT device, for instance, the system will find a smaller subnetwork. For smartphones, it will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources. OFA decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints.
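The selection step described above can be sketched in a few lines: given pre-profiled subnetwork configurations, pick the most accurate one that fits a platform's latency budget. This is a toy illustration of the idea only; the configurations, accuracy, and latency numbers are invented, not taken from the paper.

```python
# Toy sketch of OFA-style per-platform selection: choose, from already-trained
# subnetwork configurations, the most accurate one within a latency budget.
# All accuracy/latency numbers below are invented for illustration.

subnetworks = [
    {"layers": 12, "width": 64,  "accuracy": 0.71, "latency_ms": 8},
    {"layers": 16, "width": 96,  "accuracy": 0.74, "latency_ms": 15},
    {"layers": 20, "width": 128, "accuracy": 0.76, "latency_ms": 33},
]

def select_subnetwork(latency_budget_ms):
    """Return the most accurate subnetwork that fits the device's latency budget."""
    feasible = [s for s in subnetworks if s["latency_ms"] <= latency_budget_ms]
    if not feasible:
        raise ValueError("no subnetwork fits this device")
    return max(feasible, key=lambda s: s["accuracy"])

# A cheap IoT device and a smartphone get different subnetworks,
# with no retraining in between.
iot = select_subnetwork(latency_budget_ms=10)    # picks the 12-layer network
phone = select_subnetwork(latency_budget_ms=40)  # picks the 20-layer network
```

Because the subnetworks are already trained inside the mother network, this search is cheap compared to training a new model per device.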

This relies on a "progressive shrinking" algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. It starts by training the full network at maximum size, then progressively shrinks the network to include smaller subnetworks. Smaller subnetworks are trained with the help of larger subnetworks, so they grow together. In the end, all of the subnetworks of different sizes are supported, allowing fast specialization based on the platform's power and speed limits, and a new hardware device can be supported with zero additional training cost.

In total, the researchers found, one OFA can comprise more than 10 quintillion (that's a 1 followed by 19 zeroes) architectural settings, covering probably all platforms ever needed. But training the OFA and searching it ends up being far more efficient than spending hours training each neural network per platform. Moreover, OFA does not compromise accuracy or inference efficiency: it provides state-of-the-art ImageNet accuracy on mobile devices. And, compared with state-of-the-art industry-leading CNN models, the researchers say OFA provides a 1.5 to 2.6 times speedup, with superior accuracy.

"That's a breakthrough technology," Han says. "If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size."
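The weight sharing that makes subnetworks "come pretrained" can be pictured in a toy form: the mother network owns the weights, and a subnetwork is just a slice of them, so training the mother implicitly trains every subnetwork. This is a deliberately simplified illustration of the mechanism; the sizes and the fake "training" update are invented.

```python
# Toy illustration of OFA-style weight sharing (mechanics simplified,
# all numbers invented). The mother network owns one 8x8 weight matrix;
# a subnetwork is just the top-left block of it.

mother_weights = [[0.0] * 8 for _ in range(8)]  # one layer, initially untrained

def subnetwork_view(width):
    """A width-`width` subnetwork reads the top-left block of the shared weights."""
    return [row[:width] for row in mother_weights[:width]]

# Stand-in for real gradient updates to the mother network.
mother_weights[0][0] = 1.5

small = subnetwork_view(4)
print(small[0][0])  # 1.5 -- the subnetwork sees the trained weight, no retraining
```

Progressive shrinking then orders the training so that large and small slices stay accurate at the same time, rather than training each slice from scratch.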

"The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices," says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.

"If rapid progress in AI is to continue, we need to reduce its environmental impact," says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. "The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better."

More here:
Reducing the carbon footprint of artificial intelligence - MIT News

Artificial intelligence can take banks to the next level – TechRepublic

Banking has the potential to improve its customer service, loan applications, and billing with the help of AI and natural language processing.


When I was an executive in banking, we struggled with how to transform tellers at our branches into customer service specialists instead of the "order takers" that they were. This struggle with customer service is ongoing for financial institutions. But it's an area in which artificial intelligence (AI), and its ability to work with unstructured data like voice and images, can help.

"There are two things that artificial intelligence does really well," said Ameek Singh, vice president of IBM's Watson applications and solutions. "It's really good with analyzing images and it also performs uniquely well with natural language processing (NLP)."

SEE: Managing AI and ML in the enterprise 2020 (free PDF) (TechRepublic)

AI's ability to process natural language helps behind the scenes as banks interact with their customers. In call center banking transactions, the ability to analyze language can detect emotional nuances from the speaker, and understand linguistic differences such as the difference between American and British English. AI works with other languages as well, understanding the emotional nuances and slang terms that different groups use.

Collectively, real-time feedback from AI aids bank customer service reps in call centers, because if they know the sentiments of their customers, it's easier for them to relate to customers and to understand customer concerns that might not have been expressed directly.
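The feedback loop described here can be sketched in miniature. A production system like the one Singh describes would use a trained NLP model; the keyword scorer below only illustrates the shape of the idea of surfacing a sentiment hint to a rep per utterance, and the word lists are invented.

```python
# Minimal sketch of surfacing caller sentiment to a service rep in real time.
# A real system would use a trained NLP model; this keyword scorer is only an
# illustration, and the word lists are invented.

NEGATIVE = {"frustrated", "angry", "unacceptable", "cancel"}
POSITIVE = {"thanks", "great", "helpful", "resolved"}

def sentiment_hint(utterance):
    """Return a coarse sentiment label for one caller utterance."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment_hint("I am frustrated and want to cancel my account"))  # negative
```

In a call center, each utterance's label would be streamed to the rep's screen so they can adjust tone before the customer has to spell out their frustration.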

"We've developed AI models for natural language processing in a multitude of languages, and the AI continues to learn and refine these linguistics models with the help of machine learning (ML)," Singh said.

SEE: AI isn't perfect--but you can get it pretty darn close (TechRepublic)

The result is higher quality NLP that enables better relationships between customers and the call center front line employees who are trying to help them.

But the use of AI in banking doesn't stop there. Singh explained how AI engines like Watson were also helping on the loans and billing side.

"The (mortgage) loan underwriter looks at items like pay stubs and credit card statements. He or she might even make a billing inquiry," Singh said.

Without AI, these document reviews are time consuming and manual. AI changes that because the AI can "read" the document. It understands what the salient information is and also where irrelevant items, like a company logo, are likely to be located. The AI extracts the relevant information, places the information into a loan evaluation model, and can make a loan recommendation that the underwriter reviews, with the underwriter making a final decision.

Of course, banks have had software for years that has performed loan evaluations. However, they haven't had an easy way to process the foundational documents, such as bills and pay stubs, that go into the loan decisioning process; that is what AI can now provide.
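The document-to-decision flow described above can be sketched as a small pipeline: extract salient fields from a "read" document, feed them into a loan model, and hand a recommendation to the human underwriter. Everything here is a hypothetical placeholder: the field names, the debt-to-income threshold, and the naive line-based extraction stand in for real AI document reading.

```python
# Hypothetical sketch of the AI-assisted loan pipeline: extract fields from a
# document, score them, and return a recommendation for the underwriter.
# Field names, threshold, and the extraction logic are invented placeholders.

def extract_fields(document_text):
    """Stand-in for AI document reading: pull key/value pairs from lines of text."""
    fields = {}
    for line in document_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    return fields

def recommend(fields, max_debt_to_income=0.43):
    """Toy loan model: recommend approval if debt-to-income is acceptable."""
    income = float(fields["monthly income"])
    debt = float(fields["monthly debt"])
    ratio = debt / income
    verdict = "approve" if ratio <= max_debt_to_income else "review"
    return {"recommendation": verdict, "debt_to_income": round(ratio, 2)}

# Irrelevant items (like the logo line) are extracted but simply ignored,
# echoing how the AI learns which parts of a document matter.
pay_stub = "Monthly income: 6000\nMonthly debt: 1800\nLogo: ACME Corp"
result = recommend(extract_fields(pay_stub))
print(result["recommendation"])  # approve
```

The final decision stays with the underwriter; the pipeline only turns unstructured paper into a structured recommendation to review.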

SEE: These five tech trends will dominate 2020 (ZDNet)

The best news of all for financial institutions is that AI modeling and execution don't leave them out of the process.

"The AI is designed to be informed by bank subject matter experts so it can 'learn' the business rules that the bank wants to apply," Singh said. "The benefit is that real subject matter experts get involved, not just the data scientists."

Singh advises banks looking at expanding their use of AI to carefully select their business use cases, without trying to do too much at once.

"Start small instead of using a 'big bang' approach," he said. "In this way, you can continue to refine your AI model and gain success with it that immediately benefits the business."


Continue reading here:
Artificial intelligence can take banks to the next level - TechRepublic

Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill – The Hill

SARS-CoV-2 has upended modern health care, leaving health systems struggling to cope. Addressing a fast-moving and uncontrolled disease requires an equally efficient method of discovery, development, and administration. Artificial intelligence (AI)- and machine learning-driven health care solutions provide such an answer. AI-enabled health care is not the medicine of the future, nor does it mean robot doctors rolling from room to room in hospitals treating patients. Instead of a hospital from some future Jetsons-like fantasy, AI is poised to make impactful and urgent contributions to the current health care ecosystem. Already, AI-based systems are helping to alleviate the strain on health care providers overwhelmed by a crushing patient load, accelerate diagnostic and reporting systems, and enable rapid development of new drugs and existing drug combinations that better match a patient's unique genetic profile and specific symptoms.

For the thousands of patients fighting for their lives against this deadly disease, and the health care providers who incur a constant risk of infection, AI provides an accelerated route to understanding the biology of COVID-19. Leveraging AI to assist in prediction, correlation, and reporting allows health care providers to make informed decisions quickly. With the current standard of PCR-based testing requiring up to 48 hours to return a result, New York-based Envisagenics has developed an AI platform that analyzes 1,000 patient samples in parallel in just two hours. Time saves lives, and the company hopes to release the platform for commercial use in the coming weeks.

AI-powered wearables, such as a smart shirt developed by Montreal-based Hexoskin to continuously measure biometrics including respiration effort, cardiac activity, and a host of other metrics, provide options for hospital staff to minimize exposure by limiting the required visits to infected patients. This real-time data provides an opportunity for remote monitoring and creates a unique dataset to inform our understanding of disease progression to fuel innovation and enable the creation of predictive metrics, alleviating strain on clinical staff. Hexoskin has already begun to assist hospitals in New York City with monitoring programs for their COVID-19 patients, and they are developing an AI/ML platform to better assess the risk profile of COVID-19 patients recovering at home. Such novel platforms would offer a chance for providers and researchers to get ahead of the disease and develop more effective treatment plans.

AI also accelerates discovery and enables efficient and effective interrogation of the necessary chemistry to address COVID-19. An increasing number of companies are leveraging AI/ML to identify new treatment paths, whether from a list of existing molecules or de novo discovery. San Francisco-based Auransa is using AI to map the gene sequence of SARS-CoV-2 to its effect on the host, generating a shortlist of already approved drugs that have a high likelihood of alleviating symptoms of COVID-19. Similarly, UK-based Healx has set its AI platform to discover combination therapies, identifying multi-drug approaches to simultaneously treat different aspects of the disease pathology to improve patient outcomes. The company analyzed a library of 4,000 approved drugs to map eight million possible pairs and 10.5 billion triplets to generate combination therapy candidates. Preclinical testing will begin in May 2020.

Developers cannot always act alone: realizing the potential of AI often requires the resources of a collaboration to succeed. Generally, the best data sets and the most advanced algorithms do not exist within the same organization, and multiple data sources and algorithms often need to be combined for maximum efficacy. Over the last month, we have seen the rise of several collaborations to encourage information sharing and hasten the delivery of potential outcomes to patients.

Medopad, a UK-based AI developer, has partnered with Johns Hopkins University to mine existing datasets on COVID-19 and relevant respiratory diseases captured by the UK Biobank and similar databases, to identify a biomarker associated with a higher risk for COVID-19. A biomarker database is essential in executing long-term population health measures, and can most effectively be generated by an AI system. In the U.S., over 500 leading companies and organizations, including the Mayo Clinic, Amazon Web Services, and Microsoft, have formed the COVID-19 Healthcare Coalition to assist in coordinating all COVID-19-related matters. As part of this effort, LabCorp and HD1, among others, have come together to use AI to make testing and diagnostic data available to researchers to help build disease models, including predictions of future hotspots and at-risk populations. On the international stage, the recently launched COAI, a consortium of AI companies being assembled by the French-US firm OWKIN, aims to increase collaborative research, accelerate the development of effective treatments, and share COVID-19 findings with the global medical and scientific community.

Leveraging the potential of AI and machine learning capabilities provides a potent tool to the global community in tackling the pandemic. AI presents novel ways to address old problems and opens doors to solving newly developing population health concerns. The work of our health care system, from the research scientists to the nurses and physicians, should be celebrated, and we should embrace the new tools which are already providing tremendous value. With the rapid deployment and integration of AI solutions into the COVID-19 response, the health care of tomorrow is already addressing the challenges we face today.

Brandon Allgood, PhD, is vice chair of the Alliance for Artificial Intelligence in Healthcare, a global advocacy organization dedicated to the discovery, development, and delivery of better solutions to improve patient lives. Allgood is an SVP of DS&AI at Integral Health, a computationally driven biotechnology company in Boston.

More:
Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill - The Hill