Archive for the ‘Artificial Intelligence’ Category

The Promise Of Artificial Intelligence In Chillers And Rooftops – ACHR NEWS


See original here:
The Promise Of Artificial Intelligence In Chillers And Rooftops - ACHR NEWS

Trial of Artificial Intelligence boosts IVF success and brings joy to Queensland couple – 9News

Couples are putting their trust in artificial intelligence to help them become parents, with an Australia-first trial proving a success.

Sarah and Tim Keys from Queensland have been trying to conceive for several years and, after suffering a number of miscarriages, decided to turn to IVF.

When their GP suggested joining the AI trial, the couple did their research and found it could improve their chances of carrying a pregnancy to term.

"It's really hard to go through those miscarriages so anything that could decrease the chances, let's go with that," Ms Keys said.

Doctors are hailing the technology as the biggest leap forward in IVF in over three decades.

"It's completely new, completely different and ... it's all to do with the evolution of computer technology," Associate Professor Anusch Yazdani from the Queensland Fertility Group said.

As part of the international study, led by national fertility provider Virtus Health, 1000 patients will be recruited at five IVF clinics across Australia, alongside sites in Ireland and Denmark.

During each IVF cycle, embryos will be grown in an incubator fitted with tiny time-lapse cameras which will record 115,000 images over five days.

Each embryo is then given a rating based on predicted fetal heart outcomes and the one with the greatest chance of survival is implanted.
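
In rough terms, the selection step described here amounts to ranking embryos by a model-predicted score and implanting the top candidate. The sketch below is purely illustrative: the viability_model object and its 0-to-1 score are assumptions for demonstration, not details of the Virtus Health trial.

```python
# Illustrative only: rank embryos by a model-predicted viability score and pick the best.
# viability_model and its 0-to-1 scoring scale are hypothetical stand-ins,
# not details of the actual trial.

def select_embryo(embryo_image_sequences, viability_model):
    """Return the index and score of the embryo with the highest predicted viability."""
    scores = [viability_model.predict(sequence) for sequence in embryo_image_sequences]
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return best_index, scores[best_index]
```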

If the trial is successful the technology will be rolled out around the world.

So far, the trial at seven fertility clinics around the country has a 90 per cent success rate.

"That's much better than our embryologists have managed to do so this is a really exciting time to," Professor Yazdani said.

Ms Keys is now 26 weeks pregnant and cautiously optimistic for the future.

"We're very excited we're expecting a little girl," she said.

"I think we'll still be a bit stressed until we're holding her, but where we're at, at the moment is really awesome."

Read the rest here:
Trial of Artificial Intelligence boosts IVF success and brings joy to Queensland couple - 9News

The Future of Artificial Intelligence: Edge Intelligence – Analytics Insight

Advances in deep learning have driven enormous growth in artificial intelligence (AI) applications and services in recent years, ranging from personal assistants to recommendation systems to video and audio surveillance. More recently, with the spread of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices have connected to the Internet, generating vast quantities of data at the network edge.

Driven by this trend, there is a pressing need to push the frontiers of AI to the network edge in order to fully unlock the potential of edge big data. Edge computing, an emerging paradigm that moves computing tasks and services from the network core to the network edge, is widely seen as a promising answer to this need. The resulting interdiscipline, edge AI or edge intelligence (EI), is beginning to attract a great deal of interest.

Research on EI is still in its early stages, however, and a dedicated venue for exchanging recent advances in EI is much desired by both the computer systems and AI communities. The spread of EI clearly does not mean there is no future for centralized cloud intelligence (CI). Rather, the orchestrated use of edge and cloud virtual resources is expected to create a continuum of intelligent capabilities and functions across cloudified infrastructures. This is one of the major challenges for a successful, future-proof deployment of 5G.

Given growing markets and rising service and application demands on data and computing power, several factors and advantages are driving the development of edge computing. As the need for dependable, adaptable, and contextual data shifts, much of the processing is moving locally onto the device, yielding improved performance and response times (within a few milliseconds), lower latency, higher power efficiency, improved security because data stays on the device, and cost savings as traffic to data centers is minimized.

Perhaps the greatest advantage of edge computing is the ability to secure real-time results for time-sensitive needs. In many cases, sensor data can be gathered, analyzed, and acted on immediately, without a round trip to a distant cloud data center. Scalability across different edge devices, to help speed local decision-making, is fundamental. The ability to deliver immediate, dependable information builds confidence, increases customer engagement, and, in many cases, saves lives. Consider the many industries, such as home security, aviation, automotive, smart cities, and health care, in which an immediate understanding of diagnostics and equipment performance is critical.
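
As a rough illustration of the local, time-sensitive processing described above, the sketch below filters sensor readings on the device and forwards only anomalous values upstream. The threshold and the send_to_cloud callback are hypothetical placeholders, not part of any particular product.

```python
# Minimal sketch of on-device filtering: readings are analyzed locally and only
# anomalies are forwarded, so routine data never incurs a round trip to the cloud.
# The threshold and the send_to_cloud callback are hypothetical placeholders.

def process_readings(readings, threshold, send_to_cloud):
    anomalies = []
    for value in readings:
        if abs(value) > threshold:      # time-sensitive decision made on the device
            anomalies.append(value)
            send_to_cloud(value)        # only exceptional data leaves the device
    return anomalies

# Example usage with a stubbed uplink:
if __name__ == "__main__":
    process_readings([0.2, 0.4, 9.7, 0.3], threshold=5.0, send_to_cloud=print)
```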

Indeed, recent advances in AI may have a broad effect on various subfields of networking. Traffic prediction and classification, for example, are two of the most studied applications of AI in the networking field. Deep learning is also offering promising solutions for efficient resource management and network adaptation, improving network performance even today in areas such as traffic scheduling, routing, and TCP congestion control, and EI could bring further performance advantages in these same areas.

At the same time, it is still challenging today to build a real-time framework around heavy computation loads and big data. This is where edge computing (EC) enters the scene. Orchestrating the execution of AI methods across computing resources in the cloud as well as at the edge, where most data is produced, will help on this path. In addition, gathering and filtering large volumes of data containing both network profiles and performance measurements remains crucial, and it becomes far more costly once the need for data labelling is considered. Even these bottlenecks could be addressed by enabling EI ecosystems capable of attracting win-win collaborations between network and service providers, OTTs, technology providers, integrators, and users.

A further dimension is that pervasive intelligence embedded in the network (cloud computing integrated with edge intelligence in the network nodes and increasingly smart terminals) could likewise pave the way for exploiting advances in emerging distributed ledger technologies and platforms.

Edge computing provides an alternative to the long-distance transfer of data between connected devices and remote cloud servers. With a database management system (DBMS) on the edge devices, organizations can achieve immediate insight and control, and local DBMS performance removes the dependence on latency, data rate, and bandwidth. It also reduces threats through a comprehensive security approach: edge computing provides an environment for managing the cybersecurity efforts of both the intelligent edge and the intelligent cloud, and unified management systems can provide intelligent threat protection.

It also supports compliance with regulations such as the General Data Protection Regulation (GDPR) that govern the use of private data. Companies that do not comply risk significant penalties. Edge computing offers various controls that can help companies protect private data and achieve GDPR compliance.

Innovative organizations such as Amazon, Google, Apple, BMW, Volkswagen, Tesla, Airbus, Fraunhofer, Vodafone, Deutsche Telekom, Ericsson, and Harting are now placing their bets on AI at the edge. Some of these organizations are forming trade associations, such as the European Edge Computing Consortium (EECC), to help educate and persuade small, medium-sized, and large enterprises to drive the adoption of edge computing in manufacturing and other industrial markets.

Read the rest here:
The Future of Artificial Intelligence: Edge Intelligence - Analytics Insight

Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions – ArchDaily


AI in the architecture industry has provoked much debate in recent years - yet it seems like very few of us know exactly what it is or why it has created this storm of emotions. There are professionals researching AI who know more about the field than I do, but I have hands-on experience using AI and algorithms in my design work over the past 10 years through various projects. This is one of the challenges our field faces: how can we make practical use of these new tools?

Many people have reached out to me claiming that AI could not do their job and that being an architect is so much more than just composing a plan drawing or calculating the volume of a building envelope. They are right. But having said that, there is no reason not to be open to the possibility that AI can help us design even better buildings. There are a lot of tasks that are much better solved with computation than manually, and vice versa. In general, if we are able to reduce a problem to numbers or clearly define what we are trying to solve, AI will probably be able to solve it. If we are looking for subjective opinions or emotions, it might be trickier for an AI to help. Or, to be more precise, it might be trickier for us to provide the AI with the right tools to subjectively analyze our designs.

When we talk about AI within the field of architecture it often boils down to optimization. Where can we find more sellable square meters or how can we get more daylight into dark apartments? A bigger building and more windows might be the answer, but what other parameters might be affected by this?

Where there are a lot of parameters at stake that need to be weighed against each other, AI can help us a lot. Millions of scenarios can be evaluated, and the best one selected, in the same amount of time it takes us to ride the subway to work. Our AI will present us with the ultimate suggestion based on the parameters we provided.

What if we forgot something? As soon as we start to optimize, we have to consider that the result will be no better than the parameters, training sets, and preferences we provided the AI with for solving the task. If we were to ask a thousand different people "Who's the better architect, Zaha Hadid or Le Corbusier?" we would probably get an even split of answers motivated by a thousand different reasons, since the question is highly subjective. In this case, there is no right or wrong, but if we asked who had designed the highest number of buildings, we could get a correct answer. Even if the answer from your AI is the correct one and mathematically optimal, you must consider whether the question itself was right.


Another important part of optimization is the question of how to weigh different features against each other. Is Gross Floor Area (GFA) more important than daylight, and if it is, how much more? This is something the architect, the designer of the algorithm, or the client needs to decide. Humans have opinions, a specific taste, a preferred style, and so on. AI does not.

Optimizing for maximum gross floor area in parallel with a daylight analysis will give you a certain result, but it might not be the same thing as designing a great building. Yet on the flip side, failing to meet the client's expectations for GFA, or failing to make an apartment habitable due to lack of light, might result in no building at all.
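
One simple way to picture the weighting decision discussed above is a weighted-sum score over candidate designs. The metrics and weights below are illustrative assumptions; choosing them is exactly the human judgment described here.

```python
# Illustrative weighted-sum scoring of design options. The metric names (gfa, daylight)
# and the weights are assumptions; in practice the architect or client sets them.

def score_option(option, weights):
    """option: dict of normalized metrics in [0, 1]; weights: relative importance."""
    return sum(weights[name] * option[name] for name in weights)

options = [
    {"gfa": 0.9, "daylight": 0.4},   # bigger building, darker apartments
    {"gfa": 0.7, "daylight": 0.8},   # smaller building, brighter apartments
]
weights = {"gfa": 0.6, "daylight": 0.4}   # the human decision: how much GFA matters here

best = max(options, key=lambda option: score_option(option, weights))
print(best)
```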

AI presents many new opportunities for our profession, and I believe that the architect is harder to replace with AI than many other professions due to our job's subjective nature. The decisions we make to create great buildings often depend on opinions, and as a result there is no right or wrong. But I also believe that there are a lot of things we can improve on. We do not have to go as far as using AI: in many cases, we would benefit a lot from simple automation. There are many manual tasks performed by architects at the moment that have to be done to realize a project but do not add any value to the final product. If AI or automation can help us with these tasks, we can spend more time doing what we do best - designing great architecture that adds value for the people who inhabit our projects and for our cities more widely.

View post:
Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions - ArchDaily

Reducing the carbon footprint of artificial intelligence – MIT News

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues.

Last June, researchers at the University of Massachusetts at Amherst released a startling report estimating that training and searching a certain neural network architecture involves emitting roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car, including its manufacturing.

This issue gets even more severe in the model deployment phase, where deep neural networks need to be deployed on diverse hardware platforms, each with different properties and computational resources.

MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the computational efficiency of the system in some key ways, the system can cut the pounds of carbon emissions involved, in some cases down to the low triple digits.

The researchers' system, which they call a once-for-all network, trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. This dramatically reduces the energy usually required to train each specialized neural network for new platforms, which can include billions of internet of things (IoT) devices. Using the system to train a computer-vision model, they estimated that the process required roughly 1/1,300 the carbon emissions compared to today's state-of-the-art neural architecture search approaches, while reducing the inference time by 1.5-2.6 times.

"The aim is smaller, greener neural networks," says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. "Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods."

The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second. The paper is being presented next week at the International Conference on Learning Representations. Joining Han on the paper are four undergraduate and graduate students from EECS, MIT-IBM Watson AI Lab, and Shanghai Jiao Tong University.

Creating a once-for-all network

The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. Neural networks automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there's still a training efficiency issue: each model has to be selected and then trained from scratch for its platform architecture.

"How do we train all those networks efficiently for such a broad spectrum of devices, from a $10 IoT device to a $600 smartphone? Given the diversity of IoT devices, the computation cost of neural architecture search will explode," Han says.

The researchers invented an AutoML system that trains only a single, large once-for-all (OFA) network that serves as a mother network, nesting an extremely large number of subnetworks that are sparsely activated from the mother network. OFA shares all its learned weights with all subnetworks, meaning they come essentially pretrained. Thus, each subnetwork can operate independently at inference time without retraining.

The team trained an OFA convolutional neural network (CNN), of the kind commonly used for image-processing tasks, with versatile architectural configurations, including different numbers of layers and neurons, diverse filter sizes, and diverse input image resolutions. Given a specific platform, the system uses the OFA as the search space to find the best subnetwork based on the accuracy and latency tradeoffs that correlate to the platform's power and speed limits. For an IoT device, for instance, the system will find a smaller subnetwork. For smartphones, it will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources. OFA decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints.
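
In outline, that deployment-time search can be thought of as choosing the most accurate subnetwork whose measured or predicted latency fits the target device. The sketch below is only a simplified rendering of the idea: the accuracy_of and latency_on predictors are assumptions, and the real system searches an enormous space with learned predictors rather than enumerating a small candidate list.

```python
# Simplified sketch of deployment-time subnetwork selection. accuracy_of() and
# latency_on() are hypothetical predictor callbacks, not the authors' actual API,
# and the real search space is far too large to enumerate like this.

def select_subnetwork(candidate_configs, latency_budget_ms, accuracy_of, latency_on):
    """Return the most accurate candidate that meets the device's latency budget."""
    feasible = [c for c in candidate_configs if latency_on(c) <= latency_budget_ms]
    if not feasible:
        return None
    return max(feasible, key=accuracy_of)
```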

This relies on a progressive shrinking algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. It starts by training the full network at its maximum size, then progressively shrinks the network to include smaller subnetworks. Smaller subnetworks are trained with the help of larger subnetworks so they grow together. In the end, subnetworks of all different sizes are supported, allowing fast specialization based on the platform's power and speed limits, and new devices can be supported with zero additional training cost.

In total, the researchers found, one OFA can comprise more than 10 quintillion (a 1 followed by 19 zeroes) architectural settings, covering probably all platforms ever needed. But training the OFA and searching it ends up being far more efficient than spending hours training each neural network per platform. Moreover, OFA does not compromise accuracy or inference efficiency; instead, it provides state-of-the-art ImageNet accuracy on mobile devices. And, compared with state-of-the-art industry-leading CNN models, the researchers say OFA provides a 1.5-2.6 times speedup, with superior accuracy.

"That's a breakthrough technology," Han says. "If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size."
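
As a rough illustration of the progressive shrinking procedure described above, the sketch below first trains the full-size network and then keeps training randomly sampled smaller subnetworks so the shared weights end up supporting every size. The helper callables are placeholders, not the published implementation.

```python
# Rough outline of progressive shrinking with placeholder helpers (not the paper's code):
# train the maximum-size network first, then progressively allow smaller subnetworks
# and fine-tune the shared weights on sampled configurations at each stage.

def progressive_shrinking(network, size_stages, sample_subnetwork, train_epoch, epochs_per_stage):
    # Stage 0: train the full network at its maximum size.
    for _ in range(epochs_per_stage):
        train_epoch(network, config=size_stages[0])
    # Later stages: enable progressively smaller depths, widths, and kernel sizes.
    for stage in size_stages[1:]:
        for _ in range(epochs_per_stage):
            config = sample_subnetwork(up_to=stage)   # a subnetwork no larger than this stage allows
            train_epoch(network, config=config)       # update the shared weights for that subnetwork
    return network
```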

"The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices," says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.

"If rapid progress in AI is to continue, we need to reduce its environmental impact," says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. "The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better."

More here:
Reducing the carbon footprint of artificial intelligence - MIT News