Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence being used to collect data amid pandemic to avoid another – KUSI

SAN DIEGO (KUSI) – When it comes to the battle against COVID-19, artificial intelligence is being used on a large scale to prevent future pandemics.

As you know, this week offers more positive news on the vaccine front: Moderna just announced that its vaccine trial showed a 94.5% effectiveness rate. This encouraging news is especially welcome after Pfizer's 90%+ effectiveness breakthrough was announced last week, as COVID-19 infection rates continue to climb rapidly.

If there's any silver lining to today's pandemic, it's that the data collected during it, combined with artificial intelligence, will better prepare us to prevent or more effectively handle future pandemics.

Neil Sahota, Chief Innovation Officer & United Nations A.I. Advisor, joined KUSI's Paul Rudy on Good Morning San Diego to explain how artificial intelligence is being used right now to prevent future pandemics.

Sahota gave a TED Talk on a similar topic, which you can view here.

Wood and Cognite unite to unlock artificial intelligence solutions for industrial operations – Hydrocarbon Engineering

Cognite, a global industrial artificial intelligence (AI) software company, and Wood, a global engineering and consulting company, have agreed to a strategic partnership that will accelerate industrial transformation by creating AI solutions that enable more connected, sustainable and data-driven operations for heavy-asset, infrastructure and industrial clients.

The collaboration will deliver value faster and at scale, combining Cognite's flagship product, Cognite Data Fusion, with Wood's multi-sector domain knowledge, data extraction and technology integration expertise to optimise productivity and performance.

Both companies are committed to deploying performance solutions that address the needs of the energy transition, with the collaboration allowing for greater understanding of existing assets and operations, liberating vast amounts of data trapped in fragmented and legacy systems.

President of Automation and Control at Wood, Mark House, said: "Wood and Cognite will leverage physics-based models and AI to quickly provide advanced analytics that drive more profitable and sustainable industrial operations.

"Through the partnership, we are addressing a familiar challenge in industry when operational and information technology converge."

John Markus Lervik, CEO of Cognite, said: "Working with Wood presents a fantastic opportunity for us to deliver value faster and at scale by playing to each of our strengths. The partnership embraces scalable innovation and value realisation, which is accelerated by combining what both Wood and Cognite are best known for in the market."

Through Cognite Data Fusion, data will be transformed from siloed raw information into meaningful digital insights in real time, enabling faster and better-informed business and operational decisions.

"Adding Cognite's advanced AI data contextualisation and operations product to Wood's technology partnership ecosystem is an exciting step as we innovate in connected operations solutions," said Darren Martin, Wood's Chief Technology Officer. "This collaboration will further enable us to meet the ambitions of our clients and empower them to be future-ready now."

Read the article online at: https://www.hydrocarbonengineering.com/refining/20112020/wood-and-cognite-unite-to-unlock-artificial-intelligence-solutions-for-industrial-operations/

Onit Acquires New Zealand-based McCarthyFinch to Drive Innovation with Artificial Intelligence and Workflow Automation – GlobeNewswire

The Next Generation of Onit is AI

Onit Acquires McCarthyFinch and Launches Precedent, an AI Legal Platform that Reads, Writes and Reasons like a Lawyer

HOUSTON, Nov. 17, 2020 (GLOBE NEWSWIRE) -- Onit, Inc., a leading provider of enterprise workflow solutions including enterprise legal management, contract lifecycle management and workflow automation, today announced that the company has acquired McCarthyFinch and its artificial intelligence platform that accelerates contract reviews and approvals by up to 70% and increases user productivity by more than 50%.

The acquisition reinforces Onit's innovation strategy to deliver powerful AI-based workflow and business process automation solutions. The company plans to further its innovation through AI by evolving its product offerings as well as the software provided by its legal operations management software subsidiary, SimpleLegal.

The technology will become an integral component of Onit's new artificial intelligence platform, Precedent, and the company's first release on the platform will be ReviewAI.

"Our vision is to build AI into our workflow platform and every product across the Onit and SimpleLegal product portfolios," stated Eric M. Elfman, Onit CEO and co-founder. "AI will have an active role in everything from enterprise legal management to legal spend management and contract lifecycle management, resulting in continuous efficiencies and cost savings for corporate legal departments. Historically, legal departments have been thought of as black boxes where requests go in and information, decisions or contracts come out with no real transparency. AI has the potential to enhance transparency and contribute to stronger enterprise-wide business collaboration in a way that conserves a lawyer's valuable time."

McCarthyFinch's breadth of AI expertise from lawyers, technologists and data scientists speaks to the ever-evolving needs of the legal profession and Onit customers.

"AI is a natural extension of our evolution," continued Elfman. "In addition to acquiring award-winning technology, we have gained some of the brightest minds in the AI space."

Nick Whitehouse, McCarthyFinch's CEO and co-founder, is now the general manager of the newly rebranded Onit AI Center of Excellence. He has focused on digital innovation and AI for more than 15 years and was recognized in 2019 as the IDC DX Leader of the Year for his advocacy across the legal industry and Australasia. He is joined by McCarthyFinch's vice president of legal, Jean Yang, who is now vice president of the Onit AI Center of Excellence.

"McCarthyFinch has been dedicated to building world-leading AI that augments lawyers and helps automate low-value and time-intensive manual legal processes. Drafting contracts and redlining documents shouldn't take up 70% of a lawyer's time, as statistics suggest. There's a better way to work," stated Whitehouse. "With AI, we've dramatically changed the contract management lifecycle and enabled businesses to move faster, provide higher-quality services and lower the cost of legal services. We are excited to join the Onit team and apply AI to Onit's contract lifecycle management solution and expansive product offerings."

Onit Is AI: Introducing Precedent and ReviewAI

Onit's new intelligence platform, Precedent, is uniquely positioned to complement its existing workflow automation platform, Apptitude, and drive AI and digital transformation in the legal market. The Precedent intelligence platform reads, writes and reasons like a lawyer, enabling legal and business professionals to get more work done faster. It combines machine learning and natural language processing so legal teams can automate tasks and processes to make them more efficient, cost-effective and faster.

The first release on the Precedent intelligence platform, ReviewAI, focuses on pre-signature contract review. Law departments need a rapid path through drafting and negotiation to contract closure so they can accelerate the pace of doing business, increase contract compliance and enhance employee productivity. Using ReviewAI, lawyers can streamline intelligent activities like contract creation, redlining, complex negotiations and risk-rating contracts on their terms. Through Precedent, ReviewAI learns from the vast inventory of a company's contracts, leverages the company's playbook and presents the results in a Microsoft Word plug-in so the legal team can work where it is accustomed to operating. Legal and contract teams can save up to 70% on review time, increase contract compliance and lower company risk.

To learn more about the acquisition, listen to the Onit podcast featuring Elfman and Whitehouse and visit us online.

About Onit

Onit is a global leader in workflow and artificial intelligence platforms and solutions for legal, compliance, sales, IT, HR and finance departments. With Onit, companies can transform best practices into smarter workflows, better processes and operational efficiencies. With a focus on enterprise legal management, matter management, spend management, contract lifecycle management and legal holds, the company operates globally and helps transform the way Fortune 500 companies and billion-dollar corporate legal departments bridge the gap between systems of record and systems of engagement. Onit helps customers find gains in efficiency, reduce costs and automate transactions faster. For more information, visit http://www.onit.com or call 1-800-281-1330.

Media inquiries: Melanie Brenneman, Onit, (713) 294-7857, Melanie.brenneman@onit.com

A video accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/445bbe2c-1b26-45b3-bcbc-75177d7e5960

Artificial intelligence assistant uses face recognition and thermal scanning to screen for COVID-19 – Vision Systems Design

An artificial intelligence system originally designed to greet event attendees has evolved into a COVID-19 screening system that protects Canada's largest and most valuable collection of operational, historic military vehicles.

Master Cpl. Lana, an AI assistant developed by CloudConstable (Richmond Hill, Ontario, Canada; http://www.cloudconstable.com), utilizes an Intel (Santa Clara, CA, USA; http://www.intel.com) RealSense 415 3D depth camera and a FLIR (Wilsonville, OR, USA; http://www.flir.com) Lepton 2.5 thermal imaging module to greet volunteers at the Ontario Regiment Museum (Oshawa, Ontario, Canada; http://www.ontrmuseum.ca) and screen them for COVID-19 infection.

The museum originally intended to deploy the AI as a greeter at the museum's front entrance, or to provide supplemental information at exhibits (Figure 1). The COVID-19 outbreak forced the museum to temporarily close to visitors, but volunteers still had to continue performing maintenance on the museum's collection, so a second deployment of the technology was installed inside a vestibule located in the vehicle garage.

The system's hardware attaches to several brackets on a wall mount assembly using pieces of wall track. The Lepton 2.5 module, which connects to the platform using a USB2 cable, sits in a custom-fabricated housing placed above the screen. The RealSense 415 camera, which connects with the platform via USB3 cable, mounts to the same housing.

The housing attaches to a servomechanism designed with off-the-shelf parts that controls a pan/tilt mount. If the subject's face is not entirely within the camera's FOV, as determined by the system's face detection inference models, the software issues servo motion control commands in real time from the hardware platform, via serial-over-USB API calls, to adjust the camera position until the subject's face is clearly visible. An array with a speaker and microphone sits below the screen.
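
To make that control loop concrete, here is a minimal Python sketch of how face-detection output could drive the pan/tilt servos over a serial link. The command format, tolerances, and step size are hypothetical; only the overall pattern (an offset from the frame center translated into serial servo commands) follows the article's description.

```python
# Hypothetical pan/tilt centering loop: face-detection offset -> serial servo command.
import serial  # pyserial

TOLERANCE = 0.10  # allowed offset from frame center, as a fraction of frame size
STEP_DEG = 2      # degrees to nudge the mount per correction (assumed)

def center_camera_on_face(face_box, frame_w, frame_h, port="/dev/ttyUSB0"):
    """Nudge the pan/tilt mount until the detected face sits near the frame center.

    face_box: (x, y, w, h) from the face-detection model, in pixels.
    """
    x, y, w, h = face_box
    face_cx = (x + w / 2) / frame_w - 0.5   # -0.5 .. 0.5, 0 means centered
    face_cy = (y + h / 2) / frame_h - 0.5

    pan_cmd = tilt_cmd = 0
    if abs(face_cx) > TOLERANCE:
        pan_cmd = STEP_DEG if face_cx > 0 else -STEP_DEG
    if abs(face_cy) > TOLERANCE:
        tilt_cmd = STEP_DEG if face_cy > 0 else -STEP_DEG

    if pan_cmd or tilt_cmd:
        # "PAN:<deg> TILT:<deg>" is an invented wire format for illustration only.
        with serial.Serial(port, 115200, timeout=1) as ser:
            ser.write(f"PAN:{pan_cmd} TILT:{tilt_cmd}\n".encode())
```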

The company first experimented with using webcams for the system, but the cameras lacked depth-sensing capability, according to Michael Pickering, President and CEO at CloudConstable. The system only interacts with users standing within approximately two yards of the screen. Doing so protects the privacy of anyone passing within the camera's FOV but not interacting with the system.
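
The two-yard interaction gate can be illustrated with the pyrealsense2 SDK that Intel's RealSense cameras use. The 1.8 m cutoff (roughly two yards) and the center-pixel distance sampling below are simplifying assumptions, not CloudConstable's actual logic.

```python
# Sketch of gating interaction on subject distance with an Intel RealSense depth stream.
import pyrealsense2 as rs

MAX_INTERACTION_DISTANCE_M = 1.8  # roughly two yards (assumed threshold)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Distance in meters at the center of the frame (a simplification).
        distance = depth.get_distance(320, 240)
        if 0 < distance <= MAX_INTERACTION_DISTANCE_M:
            pass  # subject is close enough: start the greeting/screening dialog
        # otherwise ignore passers-by, protecting their privacy
finally:
    pipeline.stop()
```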

Other camera options evaluated included the Azure Kinect from Microsoft (Redmond, WA, USA; http://www.microsoft.com), which had the advantage of a built-in microphone array, and several models of depth camera from ASUS (Beitou District, Taipei, Taiwan; http://www.asus.com). CloudConstable had difficulty finding any of these cameras readily available in Canada, however, and chose the RealSense camera.

Affordability drove the selection of the FLIR Lepton 2.5 module, which is capable of radiometric calibration at an acceptable resolution for the application, as did the module's readily available API and SDK, says Pickering.

The AI's platform, an Intel NUC 9 Pro Kit, a PC with a 238 x 216 x 96 mm footprint, mounts behind the screen. The NUC 9 Pro includes an Intel Xeon E-2286M processor, 16 GB of DDR4-2666 memory, and an integrated UHD Graphics P630 GPU. CloudConstable chose the PC for its ability to also run a discrete GPU, in this case an ASUS Dual GeForce RTX 2070 MINI 8 GB GDDR6, dedicated to graphics processing to ensure smooth, realistic animations. This allows inference processes to run strictly on the integrated GPU. The NUC 9 Pro also includes remote management software, allowing the company to provide off-site support.
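
A short, hedged sketch of how inference might be pinned to the integrated GPU with the 2020-era OpenVINO Python API, leaving the discrete card free for animation rendering. The model filenames are placeholders, and the assumption that the integrated GPU enumerates as "GPU.0" should be checked against available_devices on the actual machine.

```python
# Pin inference to one GPU by name; the discrete card stays free for graphics.
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU', 'GPU.0', 'GPU.1'] on a dual-GPU system

# Placeholder model files for a face-detection network.
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
exec_net = ie.load_network(network=net, device_name="GPU.0", num_requests=1)
```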

Ambient light proves sufficient at most deployments of the AVA system, says Pickering. A simple LED light can provide extra illumination if required, such as inside the vestibule where museum volunteers go through their automated COVID-19 screening.

Volunteers stand in front of a high-definition ACER (San Jose, CA, USA; http://www.acer.com) display, on which Master Cpl. Lana appears (Figure 2); Pickering notes that the system supports multiple display types, however. The AI asks the volunteer a set of COVID-19 screening questions, such as whether the volunteer is experiencing symptoms or has been exposed to anyone with the illness. The system then measures the volunteer's skin temperature using the thermal imaging module.

If the volunteer correctly answers the screening questions and passes the temperature scan, they are checked in by the system and proceed into the museum for their shift. According to Jeremy Blowers, executive director of the Ontario Regiment Museum, the procedure takes less than 60 seconds to complete.
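
The check-in logic described above can be summarized in a small decision sketch. The question wording, the temperature threshold, and the check_in/alert_managers hooks are illustrative assumptions rather than CloudConstable's actual configuration.

```python
# Hedged sketch of the pass/fail check-in flow; thresholds and hooks are assumed.
SCREENING_QUESTIONS = [
    "Are you experiencing any COVID-19 symptoms?",
    "Have you been exposed to anyone with COVID-19?",
]
TEMP_THRESHOLD_C = 38.0  # illustrative cutoff, not the production value

def screen_volunteer(ask, measure_skin_temp, check_in, alert_managers, name):
    answers = [ask(q) for q in SCREENING_QUESTIONS]   # each answer is True/False
    temperature = measure_skin_temp()                 # from the thermal module

    if not any(answers) and temperature < TEMP_THRESHOLD_C:
        check_in(name)        # volunteer proceeds into the museum for their shift
        return True
    alert_managers(name)      # silent fail: only managers are notified
    return False
```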

If the screening questions are not answered correctly or the temperature scan fails, the system sends an SMS message to managers' phones informing them that a person in the facility has failed the COVID screening. The user does not learn that the screening failed, for fear of creating alarm. For example, a previous iteration of the system displayed a live infrared image on the monitor; Blowers asked CloudConstable to remove the image in case it showed elevated temperatures and upset the volunteer.
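
The article does not say which SMS gateway is used; as one hedged example, the alert_managers hook from the sketch above could be implemented with Twilio's Python client.

```python
# One possible SMS alert implementation; Twilio is shown purely as an example.
from twilio.rest import Client

def alert_managers(name, manager_numbers, account_sid, auth_token, from_number):
    client = Client(account_sid, auth_token)
    for number in manager_numbers:
        client.messages.create(
            body=f"COVID screening alert: {name} did not pass screening.",
            from_=from_number,
            to=number,
        )
```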

In the case of a failed result, a human employee delivers a second set of screening questions. They also give the volunteer time to cool down, to account for artificially elevated skin temperatures after working outside on a hot day, for example. A second temperature scan with a handheld device then takes place, and management decides whether or not to allow the volunteer access to the building.

If volunteers want the system to recognize them, they must register with the software and allow the system to learn what their face looks like. A video teaches volunteers how to work with the AI in order to allow her to recognize them, for instance by taking off their hats, eyeglasses, and/or masks during the registration process, says Blowers. Lana surprised museum staff by learning within two weeks how to recognize registered volunteers even if they had their masks on, Blowers adds.

Once a volunteer registers, the AI greets them, informs them they are checked in, and thanks them for volunteering at the museum, all by name. Museum management receives compiled reports on check-in, check-out, and total volunteer hours on site.

AVA's development began in the fall of 2018 using the Intel distribution of the OpenVINO toolkit, open-source software designed to optimize deep learning models from pre-built frameworks and ease the deployment of inference engines onto Intel devices. CloudConstable used pre-trained convolutional neural network models for face detection and head pose detection, which the company supplemented with a rules-based algorithm built on the inference results from the head pose model.
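
As a rough illustration of that pipeline, the sketch below loads a pre-trained face-detection network with the 2020-era OpenVINO Python API and converts its detections into pixel boxes. The model files and the [1, 1, N, 7] output layout are typical of Open Model Zoo detection models and are assumptions here, not details confirmed by the article.

```python
# Load a face-detection network and turn its raw output into pixel bounding boxes.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
exec_net = ie.load_network(network=net, device_name="GPU.0")

def detect_faces(frame_bgr, conf_threshold=0.6):
    # Resize the frame to the network's input shape and reorder to NCHW.
    n, c, h, w = net.input_info[input_name].input_data.shape
    blob = cv2.resize(frame_bgr, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]
    result = exec_net.infer({input_name: blob})[output_name]

    boxes = []
    fh, fw = frame_bgr.shape[:2]
    # Assumed output rows: [image_id, label, confidence, xmin, ymin, xmax, ymax].
    for _, _, conf, x1, y1, x2, y2 in result[0][0]:
        if conf > conf_threshold:
            boxes.append((int(x1 * fw), int(y1 * fh),
                          int((x2 - x1) * fw), int((y2 - y1) * fh)))
    return boxes
```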

Because the AI only asks yes-or-no questions during the COVID-19 screening, Microsoft Azure's speech-to-text API suits this and other AVA deployments, says Pickering. Head pose detection algorithms can also determine whether the volunteer nods or shakes their head and translate the motion as a yes or no answer, respectively.
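
A rules-based nod/shake interpreter of the kind described might look like the following sketch, which watches pitch and yaw excursions over a short window of head-pose estimates. The thresholds and window handling are illustrative guesses, not CloudConstable's production values.

```python
# Interpret a short time series of head-pose angles as a nod ("yes") or shake ("no").
def interpret_gesture(pose_history, angle_threshold_deg=10.0, min_reversals=2):
    """pose_history: list of (yaw_deg, pitch_deg) samples over roughly two seconds."""
    def count_reversals(values):
        # Count direction changes among excursions beyond the threshold.
        reversals, last_sign = 0, 0
        for v in values:
            if abs(v) < angle_threshold_deg:
                continue
            sign = 1 if v > 0 else -1
            if last_sign and sign != last_sign:
                reversals += 1
            last_sign = sign
        return reversals

    yaws = [y for y, _ in pose_history]
    pitches = [p for _, p in pose_history]
    if count_reversals(pitches) >= min_reversals:
        return "yes"   # repeated up/down pitch motion reads as a nod
    if count_reversals(yaws) >= min_reversals:
        return "no"    # repeated left/right yaw motion reads as a shake
    return None        # no clear gesture; fall back to speech recognition
```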

All data generated by interacting with the volunteers, including the answers given to the screening questions and the thermal scan results, is stored on the Microsoft Azure cloud service.
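
The article names Azure but not the specific storage service; as one plausible example, each screening record could be written to Azure Blob Storage with the azure-storage-blob SDK. The container name and record fields below are placeholders.

```python
# Hypothetical per-screening record written to Azure Blob Storage as JSON.
import json
from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

def store_screening_record(conn_str, name, answers, temperature_c, passed):
    record = {
        "volunteer": name,
        "answers": answers,
        "skin_temperature_c": temperature_c,
        "passed": passed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    service = BlobServiceClient.from_connection_string(conn_str)
    blob = service.get_blob_client(
        container="screening-records",
        blob=f"{record['timestamp']}-{name}.json",
    )
    blob.upload_blob(json.dumps(record))
```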

No false negatives have occurred in the COVID-19 screening results to date, as verified by the absence of reported cases among staff or their families, according to Blowers. False alarms have occurred, however, including two cases where volunteers were working outside in 42°C weather while wearing black hats, which elevated their skin temperature.

CloudConstable is currently experimenting with the Intel RealSense 455 model for future AVA deployments. The camera has a wider FOV than the RealSense 415 and therefore presents less of a challenge for tall users. Both cameras use the same SDK, so the 455 can be swapped in for the 415 without any software updates. The larger 455 model does require a larger mount than the 415, however.

Job Ads for AI Could Soon Look Like This. Are You Ready? – ExtremeTech

Wanted: Human Assistant to the Artificial Intelligence

We are seeking junior and mid-level human applicants to serve as data science assistants to our departmental artificial intelligence (AI) in charge of data analytics. Responsibilities include reviewing, interpreting, and providing feedback about analytics results to the AI, and writing summary reports of AI results for human communication. Requires ability to interact with vendors and information technology staff to provide hardware support for the AI. Experience collaborating with computer-based staff a plus. Must have good human-computer interaction skills. Formal training in the ethical treatment of computers and assessment of the fairness and bias of computer-generated results preferred.

The above is a job advertisement from the future, but not that far into it. It points to where we are going, and where we could be in as few as five years if we devote the resources and resolve to do the necessary research. Our recent past has shown that we can develop the type of machines that would soon open up a whole new field of lucrative and fulfilling work.

See, over the last decade, a new computer science discipline called automated machine learning, or AutoML, has rapidly developed. AutoML grew organically in response to the many challenges of applying machine learning to the analysis of big data for the purpose of making predictions about health outcomes, economic trends, device failures, and any number of other things in fields that are best served by rapid and comprehensive data analysis.

For run-of-the-mill machine learning to work, an abundance of choices is required, ranging from the optimal method for the data being analyzed to the parameters that should be chosen within it. For perspective, there are dozens of popular machine learning methods, each with thousands or millions of possible settings. Wading through these options can be daunting for new users and experts alike.

The promise of AutoML, then, is that the computer can find the optimal approach automatically, significantly lowering the barrier to entry.
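
To give a small taste of what that automation looks like, the scikit-learn sketch below runs a single search that chooses both the learning method and its hyperparameters. Real AutoML systems such as PennAI search vastly larger spaces than this toy grid, but the principle is the same.

```python
# Toy model-and-hyperparameter search: one search picks the method and its settings.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([("model", LogisticRegression(max_iter=5000))])
search_space = [
    {"model": [LogisticRegression(max_iter=5000)], "model__C": [0.01, 0.1, 1, 10]},
    {"model": [RandomForestClassifier()],
     "model__n_estimators": [100, 300],
     "model__max_depth": [None, 5, 10]},
]

search = GridSearchCV(pipe, search_space, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)  # the "optimal approach" for this data
```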

So how do we get to AutoML and to the job advertisement above? There are several hurdles.

The first is persistence. An artificial intelligence (AI) for AutoML must be able to analyze data continuously and without interruption. This means the AutoML AI needs to live in a robust, redundant, and reliable computing environment. This can likely be accomplished using currently available cloud computing platforms. The key advance is modifying the software to be persistent.

The second hurdle is memory and learning. An AutoML AI must have a memory of all machine learning analyses it has run and learn from that experience. PennAI, which my colleagues and I developed, is an example of an open-source AutoML tool that has both, but there aren't many others. An important next step would be to give AutoML the ability to learn from failure. Current tools all learn from successes, but humans learn more from failure than success. Building this ability into an AutoML AI could be quite challenging but necessary.
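
One way to picture such a memory is a run log that records every analysis, including the ones that crash or score poorly, and is consulted to rank methods for the next dataset. The sketch below is deliberately simple and illustrates the idea only; it is not how PennAI implements its memory.

```python
# Minimal "experiment memory": log every run (successes and failures) and rank methods.
import json
from collections import defaultdict
from pathlib import Path

LOG_PATH = Path("automl_memory.jsonl")

def record_run(dataset, method, params, score, failed=False, error=None):
    entry = {"dataset": dataset, "method": method, "params": params,
             "score": score, "failed": failed, "error": error}
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def rank_methods():
    """Prefer methods that historically score well and rarely fail."""
    stats = defaultdict(lambda: {"scores": [], "failures": 0, "runs": 0})
    if LOG_PATH.exists():
        for line in LOG_PATH.open():
            e = json.loads(line)
            s = stats[e["method"]]
            s["runs"] += 1
            if e["failed"]:
                s["failures"] += 1
            elif e["score"] is not None:
                s["scores"].append(e["score"])

    def value(method):
        s = stats[method]
        mean_score = sum(s["scores"]) / len(s["scores"]) if s["scores"] else 0.0
        failure_rate = s["failures"] / s["runs"] if s["runs"] else 0.0
        return mean_score - failure_rate  # penalize methods that often fail

    return sorted(stats, key=value, reverse=True)
```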

The third hurdle is explainability. A strength of human-based data science is our ability to ask each other why. Why did you choose that algorithm? Why did you favor one result over another? Current AutoML tools do not yet allow the user to ask.

The final hurdle is human-computer interaction (HCI). What is the optimal way for a human to interact with AI doing data analytics? What is the best way for a human to give an AI feedback or provide it with knowledge? While we have made great progress in the general space of HCI, our knowledge of how to interact with AIs remains in its infancy.

It is entirely conceivable that an AI for AutoML could be built within the next few years that is persistent and can learn from experience, explain the decisions it makes as well as the results it generates, interact seamlessly with humans, and efficiently incorporate and use expert knowledge as it tries to solve a data science problem. These are all active areas of investigation and progress will depend mostly on a dedicated effort to bring these pieces together.

All that said, automated and persistent AI systems will find their place in the near future, once we make a concerted effort to thoroughly research them. We should start preparing our human-based workforce for this reality. We will need vocational programs to train humans how to interact with a persistent AI agent, in much the same way that we have programs to train those who work with and interpret specialized equipment, such as emergency room technicians. There will also need to be an educational culture shift on top of that training, as we will need to integrate AI interaction into courses covering communication, ethics, psychology, and sociology.

This technology is very much within reach. When we do reach it, we'll have a new, expansive field for human workers. Soon, it will be time to write a job description, but only once we figure out some crucial problems.
