Archive for the ‘Artificial Intelligence’ Category

Predicting Traffic Crashes Before They Happen With Artificial Intelligence – SciTechDaily

A deep model was trained on historical crash data, road maps, satellite imagery, and GPS to enable high-resolution crash maps that could lead to safer roads.

Today's world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. For many of our road-related advancements (GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs), our safety measures haven't quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B.

To get ahead of the uncertainty inherent to crashes, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence developed a deep learning model that predicts very high-resolution crash risk maps. Fed a combination of historical crash data, road maps, satellite imagery, and GPS traces, the model produces risk maps that describe the expected number of crashes over a period of time in the future, identifying high-risk areas and predicting future crashes.


Typically, these types of risk maps are captured at much lower resolutions that hover around hundreds of meters, which means glossing over crucial details since the roads become blurred together. These maps, though, are built on 5×5-meter grid cells, and the higher resolution brings newfound clarity: the scientists found that a highway road, for example, has a higher risk than nearby residential roads, and ramps merging with and exiting the highway have an even higher risk than other roads.
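The grid-map structure the article describes can be pictured with a toy rasteriser: bucket crash coordinates into small fixed-size cells and count crashes per cell. This is only a sketch of the map's layout; the coordinates and helper function are invented, and the actual MIT model is a deep network, not a histogram.

```python
from collections import Counter

CELL = 5.0  # metres per cell side; the MIT maps use 5x5 m cells

def crash_count_grid(points, cell=CELL):
    """Map (x, y) crash coordinates in metres to counts per grid cell."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell), int(y // cell))] += 1
    return counts

# made-up local coordinates for illustration
crashes = [(2.0, 3.0), (4.9, 4.9), (12.0, 7.0)]
grid = crash_count_grid(crashes)
# cells (0, 0) and (2, 1) now hold 2 and 1 crashes respectively
```

At hundred-metre resolution, all three crashes above would fall into a single cell; at 5 metres, the highway cell and the residential cell stay distinct.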

"By capturing the underlying risk distribution that determines the probability of future crashes at all places, and without any historical data, we can find safer routes, enable auto insurance companies to provide customized insurance plans based on driving trajectories of customers, help city planners design safer roads, and even predict future crashes," says MIT CSAIL PhD student Songtao He, a lead author on a new paper about the research.

Even though car crashes are sparse, they cost about 3 percent of the world's GDP and are the leading cause of death in children and young adults. This sparsity makes inferring maps at such a high resolution a tricky task. Crashes at this level are thinly scattered (the average annual odds of a crash in a 5×5-meter grid cell are about one in 1,000) and they rarely happen at the same location twice. Previous attempts to predict crash risk have been largely historical: an area would only be considered high-risk if a crash had previously happened nearby.
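The one-in-1,000 figure gives a feel for how sparse the signal is. Under a simple Poisson model (an assumption made here for illustration, not the paper's method), a single cell almost never sees a crash, yet even a short stretch of road has a non-trivial chance of one somewhere along it:

```python
import math

p_annual = 1 / 1000   # average annual crash odds in a single grid cell
lam = p_annual        # treat as the Poisson rate per cell-year (illustrative)

# Probability a given cell sees zero crashes in one year
p_zero = math.exp(-lam)          # ~0.999

# Probability of at least one crash somewhere along a 1 km stretch of
# single-lane road (~200 cells of 5 m each), assuming independent cells
cells = 200
p_any = 1 - (p_zero ** cells)    # ~0.18
```

This is why purely historical maps fail at this resolution: most cells have never recorded a crash, so the model must infer risk from road structure and traffic instead.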


The team's approach casts a wider net to capture critical data. It identifies high-risk locations using GPS trajectory patterns, which give information about density, speed, and direction of traffic, and satellite imagery that describes road structures, such as the number of lanes, whether there's a shoulder, or if there's a large number of pedestrians. Then, even if a high-risk area has no recorded crashes, it can still be identified as high-risk, based on its traffic patterns and topology alone.

To evaluate the model, the scientists used crash data from 2017 and 2018 and tested its performance at predicting crashes in 2019 and 2020. Many locations with no recorded crashes were nonetheless identified as high-risk, and did indeed experience crashes during the follow-up years.

"Our model can generalize from one city to another by combining multiple clues from seemingly unrelated data sources. This is a step toward general AI, because our model can predict crash maps in uncharted territories," says Amin Sadeghi, a lead scientist at Qatar Computing Research Institute (QCRI) and an author on the paper. "The model can be used to infer a useful crash map even in the absence of historical crash data, which could translate to positive use for city planning and policymaking by comparing imaginary scenarios."

The dataset covered 7,500 square kilometers from Los Angeles, New York City, Chicago, and Boston. Among the four cities, L.A. was the most unsafe, since it had the highest crash density, followed by New York City, Chicago, and Boston.

"If people can use the risk map to identify potentially high-risk road segments, they can take action in advance to reduce the risk of trips they take. Apps like Waze and Apple Maps have incident-reporting tools, but we're trying to get ahead of the crashes before they happen," says He.

Reference: "Inferring High-Resolution Traffic Accident Risk Maps Based on Satellite Imagery and GPS Trajectories" by Songtao He, Mohammad Amin Sadeghi, Sanjay Chawla, Mohammad Alizadeh, Hari Balakrishnan and Samuel Madden, ICCV 2021.

He and Sadeghi wrote the paper alongside Sanjay Chawla, research director at QCRI, and MIT professors of electrical engineering and computer science Mohammad Alizadeh, Hari Balakrishnan, and Sam Madden. They will present the paper at the 2021 International Conference on Computer Vision.


Create And Scale Complex Artificial Intelligence And Machine Learning Pipelines Anywhere With IBM CodeFlare – Forbes


To say that AI is complicated is an understatement. Machine learning, a subset of artificial intelligence, is a multifaceted process that integrates and scales mountains of data that comes in different forms from various sources. Data is used to train machine learning models in order to develop insights and solutions from newly acquired related data. For example, an image recognition model trained with several million dog and cat photos can efficiently classify a new image as either a cat or a dog.

A better way to build and manage machine learning models

Project CodeFlare

The development of machine learning models requires the coordination of many processes linked together with pipelines. Pipelines can handle data ingestion, scrubbing, and manipulation from varied sources for training and inference. Machine learning models use end-to-end pipelines to manage input and output data collection and processing.
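The pipeline pattern described above (independent stages chained end to end) can be sketched in a few lines. The stage names and toy data below are invented for illustration; CodeFlare's actual pipeline API is different and richer.

```python
# A minimal sketch of the pipeline idea: ingestion, scrubbing, and
# feature preparation chained into one callable.
def ingest(raw):
    return [row.strip() for row in raw]

def scrub(rows):
    return [row for row in rows if row]   # drop empty records

def featurize(rows):
    return [len(row) for row in rows]     # toy feature: record length

def pipeline(raw, stages=(ingest, scrub, featurize)):
    data = raw
    for stage in stages:
        data = stage(data)
    return data

features = pipeline(["  cat ", "", " dog"])  # -> [3, 3]
```

The value of a framework like CodeFlare comes from running thousands of such chains in parallel, with scheduling, retries, and data movement handled for you.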

To deal with the extraordinary growth of AI and its ever-increasing complexity, IBM created an open-source framework called CodeFlare to address AI's complex pipeline requirements. CodeFlare simplifies the integration, scaling, and acceleration of complex multi-step analytics and machine learning pipelines on the cloud. Hybrid cloud deployment is one of the critical design points for CodeFlare, which, using OpenShift, can be easily deployed anywhere from on-premises data centers to public clouds to the edge.

It is important to note that CodeFlare is not currently a generally available product, and IBM has yet to commit to a timeline for it becoming one. Nevertheless, CodeFlare is available as an open-source project. And, as an evolving project, some aspects of orchestration and automation are still a work in progress. At this stage, issues can be reported through the public GitHub project. IBM invites community engagement through issue and bug reports, which will be handled on a best-effort basis.

CodeFlare's main features are:

Technology

CodeFlare is built on top of Ray, an open-source distributed computing framework for machine learning applications. According to IBM, CodeFlare extends the capabilities of Ray by adding specific elements to make scaling workflows easier. CodeFlare pipelines run on a serverless platform using IBM Cloud Code Engine and Red Hat OpenShift. This platform gives CodeFlare the flexibility to be deployed just about anywhere.
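CodeFlare's actual scale-out runs on a Ray cluster (Ray's `@ray.remote` tasks distribute work across machines). As a rough standard-library stand-in, the fan-out of many independent pipeline runs looks conceptually like this; the pipeline body is a placeholder, not CodeFlare code:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a training/scoring job; in CodeFlare this would be a
# pipeline submitted to the Ray-backed runtime.
def run_pipeline(config):
    return sum(range(config))

configs = [10, 100, 1000]          # e.g. three hyperparameter settings
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_pipeline, configs))
```

A thread pool on one machine and a Ray cluster across a datacenter share the same shape: submit independent units of work, collect results; the framework decides where each unit runs.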

Emerging workflows

Emerging AI/ML workflows pose new challenges

CodeFlare can integrate emerging workflows with complex pipelines that require integration and coordination of different tools and runtimes. It is also designed to scale complex pipelines such as multi-step NLP, complex time series and forecasting, reinforcement learning, and AI workbenches. The framework can integrate, run, and scale heterogeneous pipelines that use data from multiple sources and require different treatments.

How much difference does CodeFlare make?

According to the IBM Research blog, CodeFlare significantly increases the efficiency of machine learning. The blog states that a user used the framework to analyze and optimize approximately 100,000 pipelines for training machine learning models. CodeFlare cut the time it took to execute each pipeline from 4 hours to 15 minutes, a 16x speedup.

The research blog also indicates that CodeFlare can save scientists months of work on large pipelines, giving data teams more time for productive development work.

Wrapping up

Studies show that about 75% of prototype machine learning models fail to transition to production status despite large investments in artificial intelligence. Reasons for the low conversion rate range from poor project planning to weak collaboration and communication between AI data team members.

CodeFlare is a purpose-built platform that provides complete end-to-end pipeline visibility and analytics for a broad range of machine learning models and workflows. It provides a more straightforward way to integrate and scale full pipelines while offering a unified runtime and programming interface.

For those reasons, despite historically high AI model failure rates, Moor Insights & Strategy believes that a high percentage of machine learning models using CodeFlare pipelines will transition from experimental status to production status.



Transactions in the Age of Artificial Intelligence: Risks and Considerations – JD Supra

Artificial Intelligence (AI) has become a major focus of, and the most valuable asset in, many technology transactions and the competition for top AI companies has never been hotter. According to CB Insights, there have been over 1,000 AI acquisitions since 2010. The COVID pandemic interrupted this trajectory, causing acquisitions to fall from 242 in 2019 to 159 in 2020. However, there are signs of a return, with over 90 acquisitions in the AI space as of June 2021 according to the latest CB Insights data. With tech giants helping drive the demand for AI, smaller AI startups are becoming increasingly attractive targets for acquisition.

AI companies have their own set of specialized risks that may not be addressed if buyers approach the transaction with their standard process. AI's reliance on data and the dynamic nature of its insights highlight the shortcomings of standard agreement language and the risks in not tailoring agreements to address AI-specific issues. Sophisticated parties should consider crafting agreements specifically tailored to AI and its unique attributes and risks, which lend the parties a more accurate picture of an AI system's output and predictive capabilities, and can assist the parties in assessing and addressing the risks associated with the transaction. These risks include:

Freedom to use training data may be curtailed by contracts with third parties or other limitations regarding open source or scraped data.

Training data ownership can be complex and uncertain. Training data may be subject to ownership claims by third parties, be subject to third-party infringement claims, have been improperly obtained, or raise privacy issues.

To the extent that training data is subject to use limitations, a company may be restricted in a variety of ways including (i) how it commercializes and licenses the training data, (ii) the types of technology and algorithms it is permitted to develop with the training data and (iii) the purposes to which its technology and algorithms may be applied.

Standard representations on ownership of IP and IP improvements may be insufficient when applied to AI transactions. Output data generated by algorithms and the algorithms themselves trained from supplied training data may be vulnerable to ownership claims by data providers and vendors. Further, a third-party data provider may contract that, as between the parties, it owns IP improvements, resulting in companies struggling to distinguish ownership of their algorithms prior to using such third-party data from their improved algorithms after such use, as well as their ownership and ability to use model-generated output data to continue to train and improve their algorithms.

Inadequate confidentiality or exclusivity provisions may leave an AI system's training data inputs and material technologies exposed to third parties, enabling competitors to use the same data and technologies to build similar or identical models. This is particularly the case when algorithms are developed using open-source or publicly available machine learning processes.

Additional maintenance covenants may be warranted because an algorithm's competitive value may atrophy if the algorithm is not designed to permit dynamic retraining, or if the user of the algorithm fails to maintain and retrain it with updated data feeds.

In addition to the above, legislative protection in the AI space has yet to fully mature. Until it does, companies should protect their IP, data, algorithms, and models by ensuring that their transactions and agreements are specifically designed to address the unique risks presented by the use and ownership of training data, AI-based technology, and any output data generated by such technology.


AI in Robotics: Robotics and Artificial Intelligence 2021 – Datamation

Artificial intelligence (AI) is driving the robotics market into various areas, including mobile robots on the factory floor, robots that can perform a large number of tasks rather than being specialized in one, and robots that can keep track of inventory levels as well as fetch orders for delivery.

Such advanced functionality has raised the complexity of robotics. Hence the need for AI.

Artificial intelligence provides the ability to monitor many parameters in real time and make decisions. For example, an inventory robot has to know its own location, the location and level of all stock, work out the sequence in which to retrieve items for orders, know the location of other robots on the floor, navigate the site, change course when a human is near, take deliveries to shipping, keep track of everything, and more.

The mobile robot also has to interoperate with various shop floor systems, computer numerical control (CNC) equipment, and other industrial systems. AI helps all those disparate systems work together seamlessly by processing their various inputs in real time and coordinating action.

The autonomous robotic market alone is worth around $103 billion this year, according to Rob Enderle, an analyst at Enderle Group. He predicts that it will more than double by 2025 to $210 billion.

"It will only go vertical from there," Enderle said.

That's only one portion of the market. Another hot area is robotic process automation (RPA). It, too, is being integrated with AI to deal with high-volume, repeatable tasks. By handing these tasks over to robots, labor costs are reduced, workflows can be streamlined, and assembly processes are accelerated. Software can be written, for example, to take care of routine queries, calculations, and record keeping.
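The kind of routine, rule-based task RPA targets can be mimicked in a few lines. The inventory data and helper below are invented for illustration; real RPA tools operate on user interfaces, documents, and enterprise systems rather than in-memory dictionaries.

```python
# Toy RPA-style bot: answer routine stock queries and keep a record.
inventory = {"widget": 12, "gear": 0}
audit_log = []

def handle_query(item):
    in_stock = inventory.get(item, 0) > 0
    audit_log.append((item, in_stock))   # record keeping
    return f"{item}: {'in stock' if in_stock else 'out of stock'}"

replies = [handle_query(i) for i in ("widget", "gear")]
```

The point of the sketch is the shape of the work: fully specified inputs, deterministic rules, and a log, exactly the profile of tasks worth handing to software.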

Historically, two different teams were needed: one for robotics and another for factory automation. The robotics team consists of specialized technicians with their own programming language to deal with the complex kinematics of multi-axis robots. Factory automation engineers, on the other hand, use programmable logic controllers (PLCs) and shop floor systems that utilize different programming languages. But software is now on the market that brings these two worlds together.

Further, better software and more sophisticated hardware have opened the door to a whole new breed of robot. While basic models operate on two axes, the latest breed of robotic machine with AI is capable of movement on six axes. They can be programmed either to carry out one task over and over with high accuracy and speed, or to execute complex tasks, such as coating or machining intricate components.


Honda's ASIMO has become something of a celebrity. This advanced humanoid robot has been programmed to walk like a human, maintain balance, and do backflips.

But now AI is being used to advance its capabilities with an eventual view toward autonomous motion.

"The difficulty is no longer building the robot but training it to deal with unstructured environments, like roads, open areas, and building interiors," Enderle said. "They are complex systems with massive numbers of actuators and sensors to move and perceive what is around them."

Sight Machine, the developer of a manufacturing data platform, has partnered with Nissan to use AI to perform anomaly detection on 300 robots working on an automated final assembly process.

This system provides predictions and root-cause analysis for downtime.


Siemens and AUTOParkit have formed a partnership to bring parking into the 21st century.

Using Siemens automation controls with AI, the AUTOParkit solution provides a safe valet service without the valet.

This fully automated parking solution can achieve 2:1 efficiency over a conventional parking approach, AUTOParkit says. It reduces parking-related fuel consumption by 83% and carbon emissions by 82%.

In such a complex system, specialized vehicle-specific hardware and software work together to provide a smooth and seamless parking experience that is far faster than traditional parking. Siemens controls use AI to pull it all together.

Kawasaki has a large offering of robots that are primarily used in fixed installations. But now it is working on robotic mobility and that takes AI.

"For stationary robots to work seamlessly with mobile robots, it is essential that they can exchange information accurately and without failure," said Samir Patel, senior director of robotics engineering, Kawasaki Robotics USA.

To meet such integration requirements, Kawasaki robot controllers offer numerous options, including EtherNet TCP/IP, EtherNet IP, EtherCat, PROFIBUS, PROFINET and DeviceNet. These options not only allow our robots to communicate with mobile robots, but also allow communication to supervisory servers, PLCs, vision systems, sensors, and other devices.

With so many data sources to communicate with and instantaneous response needed to provide operational efficiency and maintain safety, AI is needed.

"Over time, each robot accumulates data, such as joint load, speed, temperature, and cycle count, which periodically gets transferred to the network server," Patel said. In turn, the server, running an application such as Kawasaki's Trend Manager, can analyze the data for performance and failure prediction.

Sight Machine, in close cooperation with Komatsu, has developed a system that can rapidly analyze 500 million data points from 600 welding robots.

The AI-based system can provide early warning of potential downtime and other welding faults.



Artificial intelligence: ‘The window to act is closing fast’ – The Irish Times

Artificial intelligence (AI) is a force for good that could play a huge part in solving problems such as climate change. Left unchecked, however, it could undermine democracy, lead to massive social problems and be harnessed for chilling military or terrorist attacks.

That's the view of Martin Ford, futurist and author of Rule of the Robots, his follow-up to Rise of the Robots, the 2015 New York Times bestseller and winner of the Financial Times/McKinsey Business Book of the Year, which focused on how AI would destroy jobs.

In the new book, Ford, a sci-fi fan, presents two broad movie-based scenarios.

The first is a world based on Star Trek values, where Earth's problems have been solved. Technology has created material abundance, eliminated poverty, cured most disease and addressed environmental issues. The absence of traditional jobs has not led to idleness or lack of dignity as highly educated citizens pursue rewarding challenges.

The alternative dystopian future is more akin to The Matrix, where humanity is unknowingly trapped inside a simulated reality.

"The more dystopian outcome is the default if we don't intervene. I can see massive increases in inequality, and various forms of entertainment and recreation, such as video gaming, virtual reality and drugs, becoming attractive to a part of the population that has been left behind," he tells The Irish Times.

Ford's extensive research for both books involved talking to a wide cross-section of those working on the frontiers of artificial intelligence. While the unpredicted Covid pandemic punctuated the intervening years, most of what he wrote in 2015 has been amplified, he feels. Covid, if anything, has acted as an accelerant for AI and robotics, with enduring effects in areas such as remote working, social distancing and hygiene.

On the positive side, AI has led to huge medical advances including the recent rapid Covid vaccine development and deployment. With the pace of innovation slowing in other areas, AI is potentially a game changer in areas such as the climate crisis, he says.

Worries of the bad effects of AI, however, permeate his thoughtful new volume on the subject.

Employment is one.

"Virtually any job that is fundamentally routine or predictable (in other words, nearly any role where workers face similar challenges again and again) has the potential to be automated in full or in part."

Studies suggest that as much as half of the US workforce is engaged in such work and that tens of millions of jobs could evaporate in the US alone. This won't just affect lower-skilled, low-wage workers, he warns. Predictable intellectual work is at especially high risk of automation because it can be performed by software, whereas manual labour, in contrast, requires a more expensive robot.

Ford is generally pessimistic that workers will be able to move up the value chain or move to areas less affected by the rise of AI. Some will, he acknowledges, but he wonders whether truck drivers, for example, will become robotics engineers or personal care assistants.

Moreover, many of the new opportunities being created are in the gig economy where workers typically have unpredictable hours and incomes, all of which points to rising inequality and dehumanising conditions for a large section of the workforce.

Surveillance is another issue of concern. He highlights the use of an app developed by the firm Clearview AI in the US.

In February 2019, the Indiana State Police were investigating a case where two men got into a fight in a park, one pulled a gun, shot the other man and fled the scene. A witness had filmed the incident on a mobile phone and the police uploaded the images to a new facial-recognition system they had been experimenting with.

It generated an immediate match. The shooter had appeared in a social media video with a description that included his name. It took just 20 minutes to solve the crime, even though the suspect had not been previously arrested and did not hold a driver's licence. When this was revealed along with other information about the firm, it ignited major data-privacy concerns.

Data privacy is one thing, but the capacity for AI to generate deepfakes takes this to another level. Ford offers up a scenario in which a politician's voice could be imitated in the run-up to an election, planting comments that would deliberately damage their reputation. Spread virally on social media, it might be hard to undo the stickiness of this. How many people would hear the denial or choose to believe the fake was not authentic?

"A sufficiently credible deepfake could literally shape the arc of history, and the means to create such fabrications might soon be in the hands of political operatives, foreign governments or even a mischievous teenager," he says. "In the age of viral videos, social media shaming and cancel culture, virtually anyone could be targeted and have their careers and personal lives destroyed."

Because of its history of racial injustice, the US may be especially vulnerable to orchestrated social and political disruption, he observes. "We've seen how viral videos depicting police brutality can almost instantly lead to widespread protests and social unrest. It is by no means inconceivable that, at some point in the future, a video so inflammatory that it threatens to rend the very social fabric could be synthesised, perhaps by a foreign intelligence agency."

There are smart people working on solutions. Sensity, for example, markets software it claims can detect most deepfakes, but inevitably there will be an arms race between the poachers and gamekeepers. He likens this to the race between computer virus creators and those who sell cybersecurity solutions, one in which malicious actors tend to maintain a continuous edge.

An example of the difficulties in this area was highlighted by an experiment that found that simply adding four small rectangular black-and-white stickers to a stop sign tricked an image recognition system of the type used in self-driving cars into believing it was instead a 45 mph speed-limit sign. A human observer might not even notice, and certainly wouldn't be confused, but the AI's error could have fatal consequences.
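The sticker experiment illustrates a general property of such systems: a small, targeted change to the input can flip a classifier's decision while leaving most of the input untouched. The toy "classifier" and threshold below are invented for the sketch (real attacks perturb image pixels fed to a neural network), but the flipping effect is the same in kind:

```python
# A crude classifier that averages pixel intensities and thresholds them.
def classify(pixels, threshold=0.5):
    score = sum(pixels) / len(pixels)
    return "stop sign" if score > threshold else "speed limit"

sign = [0.9] * 16                     # clearly reads as a stop sign
assert classify(sign) == "stop sign"

patched = sign[:]
for i in range(8):                    # "stickers" cover half the cells
    patched[i] = 0.0
label = classify(patched)             # the classifier is now fooled
```

A human looking at the patched sign would still see a stop sign; the classifier, relying on a brittle statistic, no longer does.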

Ford paints an even more terrifying scenario of lethal autonomous weapons. Consider the possibility of hundreds of autonomous drones swarming the US Capitol building in a co-ordinated attack. Using facial recognition technology, they seek and locate specific politicians, carrying out multiple targeted assassinations. This chilling vision was sketched out in a 2017 short film called Slaughterbots, produced by a team working under the direction of Stuart Russell, professor of computer science at the University of California, Berkeley, who has focused much of his recent work on the risks of AI.

This disturbing vision is quite realistic, he believes.

"My own view is rather pessimistic. It seems to me that the competitive dynamic and lack of trust between major countries will probably make at least the development of fully autonomous weapons a near certainty. Every branch of the US military, as well as nations including Russia, China, the United Kingdom and South Korea, are actively developing drones with the ability to swarm."

Low barriers to entry mean that even small, under-resourced groups could gain access to this type of warfare. Commercial drones could be easily modified, he explains. "We have to worry about what human beings will choose to do with weapons that are no more intelligent than an iPhone, but which are ruthlessly competent at identifying, tracking and killing targets."

This is a near-term rather than long-term worry, he adds, and the window to act is closing fast.

AI needs to be the subject of regulation, he maintains, not by politicians in Congress or elsewhere but by specialist authorities, in the same way that financial markets are regulated.

Ford also worries about China and devotes a large section of the book to what he views as an AI arms race between China and the West. As well as concerns about privacy for its citizens and human rights for oppressed minorities, he worries about the capacity of China to export not only its all-pervasive AI technology to other regions but also its world view, which is very much at odds with western values.

"It's going to become more Orwellian. To live in China will be to have every aspect of your life tracked. Maybe it will be like boiling a frog and people will not notice or care, but we certainly don't want that here [in the West]."

One possible silver lining for China's citizens, he concedes, however, is that crime rates collapse with AI-based surveillance. That's a trade-off that might just be worth considering.

Rule of the Robots: How artificial intelligence will transform everything, by Martin Ford, is published by Basic Books, New York.
