Archive for the ‘Artificial Intelligence’ Category

What the CSPC Has to Say About Artificial Intelligence – The National Law Review

Wednesday, March 31, 2021

American households are increasingly connected internally through the use of artificially intelligent appliances.1 But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders across the entire industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the last say on regulating AI and machine learning consumer product safety.

The CPSC is an independent agency composed of five commissioners who are nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration's shift away from the deregulation agenda of the prior administration and three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.2

The CPSC held the March 2, 2021 forum to gather information on voluntary consensus standards, certification, and product-specification efforts associated with products that use AI, machine learning, and related technologies. Consumer product technology is advancing faster than the regulations that govern it, even with a new administration moving towards greater regulation. As a consequence, many believe that the safety landscape for AI, machine learning, and related technology is lacking. The CPSC, looking to fill the void, is gathering information through events like this forum with a focus on its next steps for AI-related safety regulation.

To influence this developing regulatory framework, manufacturers and importers of consumer products using these technologies must understand and participate in the ongoing dialogue about future regulation and enforcement. While guidance in these evolving areas is likely to be adaptive, the CPSC's developing regulatory framework may surprise unwary manufacturers and importers who have not participated in the discussion.

The CPSC defines AI as "any method for programming computers or products to enable them to carry out tasks or behaviors that would require intelligence if performed by humans" and machine learning as "an iterative process of applying models or algorithms to data sets to learn and detect patterns and/or perform tasks, such as prediction or decision making that can approximate some aspects of intelligence."3 To inform the ongoing discussion on how to regulate AI, machine learning, and related technologies, the CPSC provides the following list of considerations:

Identification: Determine presence of AI and machine learning in consumer products. Does the product have AI and machine learning components?

Implications: Differentiate what AI and machine learning functionality exists. What are the AI and machine learning capabilities?

Impact: Discern how AI and machine learning dependencies affect consumers. Do AI and machine learning affect consumer product safety?

Iteration: Distinguish when AI and machine learning evolve and how this transformation changes outcomes. When do products evolve/transform, and do the evolutions/transformations affect product safety?4

These factors and corresponding questions will guide the CPSC's efforts to establish policies and regulations that address current and potential safety concerns.

As indicated at the March 2, 2021 forum, the CPSC is taking some of its cues for its fledgling initiative from organizations that have promulgated voluntary safety standards for AI, including Underwriters Laboratories (UL) and the International Organization for Standardization (ISO). UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, covers fully autonomous systems that move, such as self-driving cars, along with applications in mining, agriculture, maintenance, and other vehicles, including lightweight unmanned aerial vehicles.5 Using a claim-based approach, UL 4600 aims to acknowledge the deviations from traditional safety practices that autonomy requires by assessing the reliability of hardware and software necessary for machine learning, the ability to sense the operating environment, and other safety considerations of autonomy. The standard covers topics such as safety case construction, risk analysis, safety-relevant aspects of the design process, testing, tool qualification, autonomy validation, data integrity, human-machine interaction (for non-drivers), life cycle concerns, metrics, and conformance assessment.6 While UL 4600 mentions the need for a security plan, it does not define what should be in that plan.

Since 2017, ISO has had an AI working group of 30 participating members and 17 observing members.7 This group, known as SC 42, develops international standards in the area of AI and for AI applications. SC 42 provides guidance to JTC 1, a joint technical committee of ISO and the International Electrotechnical Commission (IEC), and to other ISO and IEC committees. As a result of this work, ISO has published seven standards that address AI-related topics and sub-topics, including AI trustworthiness and big data reference architecture.8 Twenty-two standards remain in development.9

The CPSC might also look to the European Union's (EU) recent activity on AI, including a twenty-six-page white paper published in February 2020 that includes plans to propose new regulations this year.10 On the heels of the General Data Protection Regulation, the EU's regulatory proposal is likely to emphasize privacy and data governance in its efforts to "build[] trust in AI."11 Other areas of emphasis include human agency and oversight, technical robustness and safety, transparency, diversity, non-discrimination and fairness, societal and environmental wellbeing, and accountability.12

***

Focused on AI and machine learning, the CPSC is contemplating potential new consumer product safety regulations. Manufacturers and importers of consumer products that use these technologies would be well served to pay attention to, and participate in, future CPSC-initiated policymaking conversations, or risk being left behind or disadvantaged by what is to come.

-------------------------------------------------------

1 See Craig S. Smith, A.I. Here, There, Everywhere, N.Y. Times (Feb. 23, 2021), https://www.nytimes.com/2021/02/23/technology/ai-innovation-privacy-seniors-education.html.

2 Erik K. Swanholt & Kristin M. McGaver, Consumer Product Companies Beware! CPSC Expected to Ramp up Enforcement of Product Safety Regulations (Feb. 24, 2021), https://www.foley.com/en/insights/publications/2021/02/cpsc-enforcement-of-product-safety-regulations.

3 85 Fed. Reg. 77183-84.

4 Id.

5 Underwriters Laboratories, Presenting the Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products, https://ul.org/UL4600 (last visited Mar. 30, 2021). It is important to note that autonomous vehicles fall under the regulatory purview of the National Highway Traffic Safety Administration. See NHTSA, Automated Driving Systems, https://www.nhtsa.gov/vehicle-manufacturers/automated-driving-systems.

6 Underwriters Laboratories, Presenting the Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products, https://ul.org/UL4600 (last visited Mar. 30, 2021).

7 ISO, ISO/IEC JTC 1/SC 42, Artificial Intelligence, https://www.iso.org/committee/6794475.html (last visited Mar. 30, 2021).

8 ISO, Standards by ISO/IEC JTC 1/SC 42, Artificial Intelligence, https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0 (last visited Mar. 30, 2021).

9 Id.

10 See Commission White Paper on Artificial Intelligence, COM (2020) 65 final (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

11 European Commission, Policies, A European approach to Artificial Intelligence, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (last updated Mar. 9, 2021).

12 Commission White Paper on Artificial Intelligence, at 9, COM (2020) 65 final (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

See the article here:
What the CSPC Has to Say About Artificial Intelligence - The National Law Review

Global Healthcare Artificial Intelligence (AI) Deals Report 2020: Details of the Latest AI Deals, Oligonucleotides Including Aptamers Agreements…

DUBLIN, March 31, 2021 /PRNewswire/ -- The "Global Artificial Intelligence (AI) Partnering Terms and Agreements 2010 to 2020" report has been added to ResearchAndMarkets.com's offering.

This report contains a comprehensive listing of all artificial intelligence partnering deals announced since 2010, including financial terms where available, along with over 440 links to online deal records of actual artificial intelligence partnering deals as disclosed by the deal parties.

The report provides a detailed understanding and analysis of how and why companies enter artificial intelligence partnering deals. The majority of deals are at the early development stage, whereby the licensee obtains a right or an option to license the licensor's artificial intelligence technology or product candidates. These deals tend to be multicomponent, starting with collaborative R&D and progressing to commercialization of outcomes.

This report provides details of the latest artificial intelligence and oligonucleotides (including aptamers) agreements announced in the healthcare sector.

Understanding the flexibility of a prospective partner's negotiated deal terms provides critical insight into the negotiation process in terms of what you can expect to achieve during the negotiation of terms. Whilst many smaller companies will be seeking details of the payment clauses, the devil is in the detail in terms of how payments are triggered - contract documents provide this insight where press releases and databases do not.

In addition, where available, records include contract documents as submitted to the Securities and Exchange Commission by companies and their partners.

Contract documents provide the answers to numerous questions about a prospective partner's flexibility on a wide range of important issues, many of which will have a significant impact on each party's ability to derive value from the deal.

In addition, a comprehensive appendix is provided, organized by artificial intelligence partnering company A-Z, deal type definitions, and example artificial intelligence partnering agreements. Each deal title links via weblink to an online version of the deal record and, where available, the contract document, providing easy access to each contract document on demand.

The report also includes numerous tables and figures that illustrate the trends and activities in artificial intelligence partnering and dealmaking since 2010.

In conclusion, this report provides everything a prospective dealmaker needs to know about partnering in the research, development and commercialization of artificial intelligence technologies and products.

Report scope

Analyzing actual company deals and agreements allows assessment of the following:

Global Artificial Intelligence Partnering Terms and Agreements includes:

In Global Artificial Intelligence Partnering Terms and Agreements, the available contracts are listed by:

Key Topics Covered:

Executive Summary

Chapter 1 - Introduction

Chapter 2 - Trends in artificial intelligence dealmaking
2.1. Introduction
2.2. Artificial intelligence partnering over the years
2.3. Most active artificial intelligence dealmakers
2.4. Artificial intelligence partnering by deal type
2.5. Artificial intelligence partnering by therapy area
2.6. Deal terms for artificial intelligence partnering

Chapter 3 - Leading artificial intelligence deals
3.1. Introduction
3.2. Top artificial intelligence deals by value

Chapter 4 - Most active artificial intelligence dealmakers
4.1. Introduction
4.2. Most active artificial intelligence dealmakers
4.3. Most active artificial intelligence partnering company profiles

Chapter 5 - Artificial intelligence contracts dealmaking directory
5.1. Introduction
5.2. Artificial intelligence contracts dealmaking directory

Chapter 6 - Artificial intelligence dealmaking by technology type

Chapter 7 - Partnering resource center
7.1. Online partnering
7.2. Partnering events
7.3. Further reading on dealmaking

Appendices

For more information about this report visit https://www.researchandmarkets.com/r/ze6mu2

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T. Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Read more from the original source:
Global Healthcare Artificial Intelligence (AI) Deals Report 2020: Details of the Latest AI Deals, Oligonucleotides Including Aptamers Agreements...

A Solution for the Future Needs of artificial intelligence – ARC Viewpoints

Arm introduced the Armv9 architecture in response to the global demand for ubiquitous specialized processing with increasingly capable security and artificial intelligence (AI). Armv9 is the first new Arm architecture in a decade, building on the success of Armv8.

The new capabilities in Armv9 are designed to accelerate the move from general-purpose to more specialized compute across every application as AI, the Internet of Things (IoT) and 5G gain momentum globally.

To address the greatest technology challenge today, securing the world's data, the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA). Confidential computing shields portions of code and data from access or modification while in use, even from privileged software, by performing computation in a hardware-based secure environment.

The Arm CCA will introduce the concept of dynamically created Realms, usable by all applications, in a region that is separate from both the secure and non-secure worlds. For example, in business applications, Realms can protect commercially sensitive data and code from the rest of the system while they are in use, at rest, and in transit.

The ubiquity and range of AI workloads demands more diverse and specialized solutions. For example, it is estimated there will be more than eight billion AI-enabled voice-assisted devices in use by the mid-2020s, and 90 percent or more of on-device applications will contain AI elements along with AI-based interfaces, like vision or voice.

To address this need, Arm partnered with Fujitsu to create the Scalable Vector Extension (SVE) technology, which is at the heart of Fugaku, the world's fastest supercomputer. Building on that work, Arm has developed SVE2 for Armv9 to enable enhanced machine learning (ML) and digital signal processing (DSP) capabilities across a wider range of applications.

SVE2 enhances the processing ability of 5G systems, virtual and augmented reality, and ML workloads running locally on CPUs, such as image processing and smart home applications. Over the next few years, Arm will further extend the AI capabilities of its technology with substantial enhancements in matrix multiplication within the CPU, in addition to ongoing AI innovations in its Mali GPUs and Ethos NPUs.
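A defining property of SVE and SVE2 is that code is vector-length agnostic: the same binary runs correctly on hardware with any vector width from 128 to 2048 bits, with predication handling the partial final iteration. The sketch below illustrates that style using the Arm C Language Extensions (arm_sve.h). It is a minimal, illustrative example rather than code from Arm; the axpy_f32 routine and its parameters are assumptions, and it presumes an SVE-capable toolchain (for example, GCC or Clang with -march=armv8-a+sve, or an Armv9/SVE2 target).

```c
#include <arm_sve.h>
#include <stddef.h>

/* y[i] += a * x[i], written without any assumption about the hardware
 * vector width. svcntw() reports how many 32-bit lanes the CPU provides,
 * and the predicate pg masks off lanes past n, so no scalar tail loop is
 * needed. */
void axpy_f32(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32_u64(i, n);   /* active lanes: i..n-1 */
        svfloat32_t vx = svld1_f32(pg, &x[i]);   /* predicated load of x */
        svfloat32_t vy = svld1_f32(pg, &y[i]);   /* predicated load of y */
        vy = svmla_n_f32_x(pg, vy, vx, a);       /* vy = vy + vx * a     */
        svst1_f32(pg, &y[i], vy);                /* predicated store     */
    }
}
```

Because the loop never hard-codes a vector width, the identical object code scales from a small embedded core to a wide server or HPC implementation, which is the portability SVE was designed to provide.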

Go here to see the original:
A Solution for the Future Needs of artificial intelligence - ARC Viewpoints

NDA Automation: Get Better, Faster NDAs With the Help of Artificial Intelligence – JD Supra

Non-disclosure agreements (NDAs) are some of the most commonly drafted agreements at any company. While they may be common, however, that doesn't mean they're unimportant; in fact, they're critical to protecting a company's business strategies and trade secrets.

Most companies use the same form NDA in almost every situation, changing only party names and the description of the confidential information involved, leaving the rest of the agreement to a series of standard terms. This means that, even though they're important, NDAs are very repetitive and routine in terms of drafting.

Corporate legal departments have long been bogged down in routine contracts. Preparing NDAs can take up a significant amount of lawyers' time, taking them away from other important work that can bring more value to the organization.

The routine nature of NDAs makes them a prime candidate for contract artificial intelligence. With the combination of AI and contracts, business users can engage in risk-free self-service to review and redline NDAs in less than two minutes. This frees up your legal staff to focus on higher-value work that helps support and grow the business.

AI is changing the game when it comes to routine contracts like NDAs. With AI, you can increase the speed of contract preparation and review while at the same time reducing your risk.

Onit's ReviewAI software employs AI to quickly and accurately draft, review, redline, and edit all types of contracts, including NDAs, in a matter of minutes. ReviewAI isn't just for those with legal training; non-legal business users can use ReviewAI to receive reviewed, redlined, and approved NDAs via email or a self-service portal in less than two minutes. This self-service option removes a huge burden from legal's shoulders, freeing up valuable time for more complex legal matters.

For lawyers and contract professionals working on NDAs, ReviewAI offers a Word add-in that provides more hands-on functionality. The add-in automatically drafts, reviews, redlines, and edits your NDAs against corporate standards. You've likely invested time in crafting standardized language for your NDAs and defining exactly what constitutes confidential information and how it's to be treated. ReviewAI will learn those terms and customize them based on user feedback, making your NDA applicable to whatever scenario you're addressing at a given moment.
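To make the "review against corporate standards" idea concrete, here is a hypothetical, heavily simplified sketch in C. It compares a submitted clause with an approved standard clause using word-level Jaccard similarity and flags large deviations for human review. The function names, the 0.90 threshold, and the sample clauses are assumptions made for illustration only; a commercial tool such as ReviewAI relies on trained language models and negotiated playbooks rather than simple word overlap.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define MAX_WORDS    128
#define MAX_WORD_LEN 32

/* Split text into lowercase alphanumeric words; returns the word count. */
static int tokenize(const char *text, char words[][MAX_WORD_LEN])
{
    int count = 0, len = 0;
    for (const char *p = text;; p++) {
        if (isalnum((unsigned char)*p) && count < MAX_WORDS) {
            if (len < MAX_WORD_LEN - 1)
                words[count][len++] = (char)tolower((unsigned char)*p);
        } else {
            if (len > 0)
                words[count++][len] = '\0';
            len = 0;
            if (*p == '\0')
                break;
        }
    }
    return count;
}

/* Returns 1 if word w appears among the first n entries of words. */
static int contains(char words[][MAX_WORD_LEN], int n, const char *w)
{
    for (int i = 0; i < n; i++)
        if (strcmp(words[i], w) == 0)
            return 1;
    return 0;
}

/* Jaccard similarity of the two clauses' word sets: |A and B| / |A or B|. */
static double jaccard(const char *a, const char *b)
{
    char wa[MAX_WORDS][MAX_WORD_LEN], wb[MAX_WORDS][MAX_WORD_LEN];
    int na = tokenize(a, wa), nb = tokenize(b, wb);
    int inter = 0, uni = 0;

    for (int i = 0; i < na; i++) {
        if (contains(wa, i, wa[i]))          /* skip duplicate words in a */
            continue;
        uni++;
        if (contains(wb, nb, wa[i]))
            inter++;
    }
    for (int i = 0; i < nb; i++) {
        if (contains(wb, i, wb[i]))          /* skip duplicate words in b */
            continue;
        if (!contains(wa, na, wb[i]))
            uni++;
    }
    return uni > 0 ? (double)inter / (double)uni : 1.0;
}

int main(void)
{
    const char *standard =
        "Recipient shall hold the Confidential Information in strict "
        "confidence for a period of five years from the date of disclosure.";
    const char *submitted =
        "Recipient shall hold the Confidential Information in confidence "
        "for a period of two years from the date of disclosure.";

    double sim = jaccard(standard, submitted);
    if (sim < 0.90)
        printf("Deviation from standard clause (similarity %.2f): route to legal.\n", sim);
    else
        printf("Clause matches the corporate standard (similarity %.2f).\n", sim);
    return 0;
}
```

Run against these sample clauses, the submitted language drops "strict" and shortens the confidentiality term, which pushes the similarity to roughly 0.84 and routes the clause to a human reviewer; a production system applies far richer language understanding to the same triage idea.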

ReviewAI is a game-changer because it delivers NDA automation. The software empowers legal departments to review contracts 60-70% faster. It also leads to a 51.5% increase in user productivity, which is critical for making the most of your resources at a time when legal departments are under increased pressure to do more with less. With ReviewAI, reviewing and redlining a contract takes two minutes or less, and the software also offers:

ReviewAI handles the entire pre-signature phase for NDAs. This dramatically reduces your contract lead time while decreasing your legal costs.

NDAs and other routine, repetitive contracts shouldn't take attorney time and focus away from higher-value legal work. Tools that combine AI and contracts to produce NDA automation take these time-consuming tasks off your lawyers' plates and also empower your business users to engage in self-service without increasing risk.

Continue reading here:
NDA Automation: Get Better, Faster NDAs With the Help of Artificial Intelligence - JD Supra

Artificial Intelligence's Impact On Jobs Is Nuanced – Forbes


Well, is artificial intelligence a job-killer or not? We keep hearing both sides, from projections of doom for many professions that will necessitate things such as universal basic income to help sidelined workers, to projections of countless unfilled jobs needed to build and manage AI-powered enterprises. For a worker losing his or her job to automation, knowing that an AI programming job is being created elsewhere is of little solace.

Perhaps the reality will be somewhere in between. An MIT report released at the end of last year states that recent fears about AI leading to mass unemployment are unlikely to be realized. "Instead, we believe that, like all previous labor-saving technologies, AI will enable new industries to emerge, creating more new jobs than are lost to the technology," the report's authors, led by Thomas Malone, director of the MIT Center for Collective Intelligence, conclude. "But we see a significant need for governments and other parts of society to help smooth this transition, especially for the individuals whose old jobs are disrupted and who cannot easily find new ones."

The future of AI and job growth or losses may be nuanced, a recent report from BCG and Faethm suggests. "Though these technologies will eliminate some jobs, they will create many others," write the report's authors, led by BCG's Rainer Strack. "Governments, companies, and individuals all need to understand these shifts when they plan for the future."

What needs to be understood? For starters, the net number of jobs lost or gained is an artificially simple metric to gauge the impact of digitization, Strack and his co-authors state. For example, eliminating 10 million jobs and creating 10 million new jobs would appear to have negligible impact. In fact, however, doing so would represent a huge economic disruption for the country, not to mention for the millions of people with their jobs at stake.

There's even a paradox in play. "Computers tend to perform well in tasks that humans find difficult or time-consuming to do, but they tend to work less effectively in tasks that humans find easy to do," the report notes. Also, in many areas, technologies will improve the quality of work that humans do by allowing them to focus on more strategic, value-creating, and personally rewarding tasks.

In other words, AI can't take over many of the soft skills essential to business growth: initiative, intuition, passion, and the ability to sell ideas and concepts. Add to that the more technical abilities needed to build and maintain AI and digital environments and to keep them focused on what the business needs. "In many sectors, severe shortages of skilled workers will mean that growth in demand for talent will be unmet," Strack and his co-authors state. "This is particularly true for computer-related occupations and jobs in science, technology, engineering, and math, since technology is fueling the rise of automation across all industries. This is why the computer and mathematics job family group is likely to suffer by far the greatest worker deficits."

"At the same time, there will also be increasing demand for jobs requiring compassionate human contact, such as health care, social services, and teaching," they add.

Along with the BCG-Faethm observations, it should be noted that AI cannot replicate the entrepreneurial skills needed to pull together technology solutions and platforms and connect them to the needs of markets. Humans are the innovators.

What to do? Strack and his team urge people to take charge of their professional development through lifelong learning. "Individuals will have to take greater responsibility for their own professional development, whether that means through upskilling or reskilling," they state. "Pay attention to sources of information and update skills accordingly, either by searching out high-quality providers of education or by charting your own course amid the vast amount of online-learning offers."

The BCG-Faethm team also makes the following recommendations from a corporate perspective:

Read more:
Artificial Intelligence's Impact On Jobs Is Nuanced - Forbes