Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence has begun to exceed expectations – Mint

In an earlier article I wondered whether a computer would ever be able to write an article that was so good that it would be difficult to tell it apart from a human-written article. That time might have already come.

In 2020 The Guardian published an article that had been written by AI. It was about the increasing use of AI in journalism and how it is changing the landscape of the industry. It discussed how AI is being used to generate news stories and to help reporters with their work. It was so natural that it was hard to believe it had been written by software called GPT-3, developed by OpenAI, a research company.

The Guardian isn't the only news organization using algorithms to write articles. The Associated Press has been using an algorithm to write short articles about company earnings reports for the past few years. In 2015, Forbes started using an algorithm to write short articles about public companies. That said, some news organizations are hesitant to use algorithms to write articles because they worry that the articles will lack the human touch that readers crave. But as algorithms become more sophisticated, that is unlikely to be a problem for much longer.

Journalists have always relied on their own skills and knowledge to produce articles, but with the advent of artificial intelligence, they will need to up their game in order to stay relevant in the future. Some believe that this will lead to the demise of journalism as we know it. Others argue that it will lead to a more efficient and effective form of journalism.

If artificial intelligence can have such an impact on journalism, can you imagine what it will do to professions like law? In some areas the impact of artificial intelligence is already being felt. For example, law firms already use AI to help with the discovery process in litigation, and to automate the drafting of simple documents like contracts. In the future, AI may be used to help with more complex tasks, such as analysing large amounts of data to predict the outcome of a case, or providing expert advice on specific legal issues.

For artificial intelligence to be used to provide legal advice that can be used in court, it's not just a question of whether the technology is good enough but whether it can be trusted. It's not just about whether the technology is accurate, but also whether it is biased.

“There is a long history of artificial intelligence being used in ways that are racist and sexist,” said Meredith Broussard, a professor of data and journalism at the Arthur L. Carter Journalism Institute at New York University and the author of Artificial Unintelligence: How Computers Misunderstand the World. “If you train an artificial intelligence system on data that is racist and sexist, the artificial intelligence system is going to be racist and sexist,” she said. For example, an artificial intelligence system that is trained on data from arrest records is going to be biased against people of colour because they are more likely to be arrested than white people. And an artificial intelligence system that is trained on data from job applications is going to be biased against women because they are more likely to be unemployed than men.
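
To make that mechanism concrete, here is a minimal sketch in Python, not drawn from the article: the group attribute, proxy feature and numbers are invented for illustration. It shows how a classifier trained on skewed historical data simply reproduces that skew in its scores.

```python
# Illustrative only: a classifier trained on skewed historical data
# reproduces that skew in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A hypothetical group attribute (0 or 1). The historical labels are skewed
# against group 1 purely because of how the data were collected.
group = rng.integers(0, 2, size=n)
label = (rng.random(n) < np.where(group == 1, 0.30, 0.10)).astype(int)

# One feature is a noisy proxy for group membership; the other is pure noise.
X = np.column_stack([group + rng.normal(0, 0.5, size=n), rng.normal(size=n)])

model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# The model assigns systematically higher risk scores to group 1, mirroring
# the skew in the training data rather than any real difference.
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
```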

That said, there is a lot of talk about the potential of artificial intelligence to transform the practice of law. One of the most promising applications is to use it to analyse large amounts of data and identify patterns that human lawyers might not be able to see. For example, if a lawyer is trying to determine whether a client is likely to default on a loan, AI could help to identify patterns in the client's behaviour that may indicate that they are at risk of defaulting.
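
As a rough illustration of that kind of pattern-finding, the following sketch trains an off-the-shelf model on invented client-behaviour features to score default risk. None of the feature names or figures come from the article; they are assumptions made for the example.

```python
# Illustrative only: scoring hypothetical client-behaviour features for
# default risk with an off-the-shelf model. Features, labels and numbers
# are synthetic stand-ins, not real client data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical features: missed payments, credit utilisation, income volatility.
X = np.column_stack([
    rng.poisson(1.0, n),
    rng.uniform(0.0, 1.0, n),
    rng.normal(0.0, 1.0, n),
])
# Synthetic label loosely driven by those features, just for the demonstration.
risk = 0.8 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2]
y = (risk + rng.normal(0.0, 1.0, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Estimated default probability for a new client with two missed payments,
# 90% credit utilisation and average income volatility.
print(model.predict_proba([[2.0, 0.9, 0.0]])[0, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```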

Another potential benefit of AI is that it could help lawyers to improve their communication with clients by helping them understand the emotions and intentions of those clients. For example, if a lawyer is trying to persuade a client to accept a settlement offer, AI could help to identify the client's emotional state to see whether they are likely to be receptive to the offer. This information could then be used to help the lawyer tailor their communication appropriately.
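
One crude way such a tool might approximate a client's emotional state is plain sentiment scoring. The sketch below uses NLTK's VADER analyzer as a stand-in for the emotion analysis described above; it is not the system the article envisions, just a minimal example, and the message text is invented.

```python
# Illustrative only: a rough sentiment score for a client's message using
# NLTK's VADER analyzer, as a stand-in for the kind of emotion analysis
# described above.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()
message = "I'm frustrated with how long this has taken, but I want it resolved."
scores = analyzer.polarity_scores(message)

# 'compound' ranges from -1 (very negative) to +1 (very positive); a tool
# could flag strongly negative messages before a settlement offer is raised.
print(scores)
print("receptive?", scores["compound"] > 0.2)
```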

In the end, AI is a tool and can never replace humans. What it can do is make us more efficient. Isn't that what we all want?

If you have read this far and remain sceptical of what AI can do, what would you say if I told you that, barring some light editing for context and continuity, everything in this article right up to the preceding paragraph was generated by AI-based writing software? Other than a few prompts I provided to nudge the article in different directions, every single idea, all the research and even the manner in which it was presented was generated by OpenAI's GPT-3 algorithm.
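
For readers curious what that prompting workflow looks like in practice, here is a minimal sketch using OpenAI's pre-1.0 Python client. The model name, prompt and parameters are my own illustrative assumptions, not the author's actual setup.

```python
# Illustrative only: prompting a GPT-3 model through the pre-1.0 OpenAI
# Python client. Model name, prompt and parameters are assumptions for
# the sketch, not the author's setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; read from an env var in practice

prompt = (
    "Write two paragraphs about how news organizations are using AI to "
    "generate articles, in the style of a newspaper opinion column."
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt=prompt,
    max_tokens=300,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```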

If all this seems impressive, we are just scratching the surface of what AI can achieve. Even in the few short hours it took me to produce this piece, I could see myself getting better at making the software do exactly what I wanted it to. With practice, I have no doubt that we (the software and I) will produce content that is indistinguishable from anything I could have come up with on my own. And in a fraction of the time.

At the same time, we must recognize the current generation of technologies for what they are: tools that will help us be more efficient at what we do. As good as they are, these AI algorithms are still no substitute for human intelligence and creativity.

Rahul Matthan is a partner at Trilegal and also has a podcast by the name Ex Machina. His Twitter handle is @matthan

Pace Of Artificial Intelligence Investments Slows, But AI Is Still Hotter Than Ever – Forbes

AI's future is commercial.

In line with a rocky and uncertain economic climate, the pace of investments flowing into the red-hot artificial intelligence technology space has cooled somewhat this past year. The field is still running hot, however, and AI is seeing a lot of progress, tempered by concerns over safety and responsibility. Interestingly, much of its development has moved out of labs and into commercial ventures.

These are the conclusions drawn by two leading venture capitalists in the tech space, Nathan Benaich of Air Street Capital and Ian Hogarth of Plural, outlined in their annual summary of the state of AI. The report covers all facets of AI, from developments at DeepMind to NVIDIA's rapidly expanding processing capabilities. There are also numerous implications for AI from a business perspective.

For starters, it turns out that 2021 was a banner year for the AI business sector, but investment softened in 2022. In 2022, investment in startups using AI has slowed down along with the broader market. Private companies using AI are expected to raise 36% less money in 2022 than in the previous year, but are still on track to exceed the 2020 level. This is comparable with the investment in all startups and scaleups worldwide, they observe. In addition, they note, enterprise software is the most invested category globally, while robotics captures the largest share of VC investment into AI.

At the same time, there has been a softening, though a less extreme one, in investment in SaaS startups and scaleups using AI, which is expected to reach $41.5 billion by the end of the year, down 33% from last year. This is still higher than VC investment in AI SaaS startups and scaleups in 2020.

Significantly, the report's co-authors observe, there has also been a drying up of academic research in AI as multi-year project funding concludes, with much of the research now shifted to the commercial sector. That means more startups and scaleups on the horizon. Once considered untouchable, talent from Tier 1 AI labs is breaking loose and becoming entrepreneurial, Benaich and Hogarth state. Alums are working on AGI, AI safety, biotech, fintech, energy, dev tools and robotics.

They add that hiring freezes and the disbanding of AI labs are precipitating the formation of many startups from giants including DeepMind and OpenAI. Even the large tech behemoths are seeing some loss of talent to startups. Meta, for example, is folding its centralized AI research group after letting it run free from product roadmap pressure for almost 10 years. In addition, all but one author of the landmark paper that introduced transformer-based neural networks have left Google to build their own startups in artificial general intelligence, conversational agents, AI-first biotech and blockchain, they note. For example, they relate, Anthropic raised $580 million in 2022, Inflection raised $225 million, and co:here raised $125 million.

[Chart: Worldwide investment in startups and scaleups using AI]

Benaich and Hogarth also looked at the prevalence of AI unicorns emerging across nations of the world, concluding that the United States leads in these high-potential startups, followed by China and the United Kingdom. A total of 292 AI unicorns emerged within the US in 2022, with a combined enterprise value of $4.6 trillion. Overall, they add, despite a significant drop in investment in US-based startups and scaleups using AI, they still account for more than half of the AI investment worldwide.

Also in 2022, the big tech companies continued to expand their AI clouds and form large partnerships with AI startups, Benaich and Hogarth state. The hyperscalers and challenger AI compute providers are tallying up major AI compute partnerships, notably Microsoft's $1 billion investment in OpenAI. We expect more to come.

For the year ahead, Benaich and Hogarth predict more than $100 million will be invested in dedicated AI-alignment organizations in the next year as more people become aware of the risk we are facing by letting AI capabilities run ahead of safety. In addition, they predict that a major user-generated content site will negotiate a commercial settlement with a startup producing AI models (such as OpenAI) for training on their corpus of user-generated content.

New ISBN publication – ARTIFICIAL INTELLIGENCE AND EDUCATION – Council of Europe

Artificial intelligence (AI) is increasingly having an impact on education, bringing opportunities as well as numerous challenges.

These observations were noted by the Council of Europe's Committee of Ministers in 2019 and led to the commissioning of this report, which sets out to examine the connections between AI and education (AI&ED).

In particular, the report presents an overview of AI&ED seen through the lens of the Council of Europe values of human rights, democracy and the rule of law; and it provides a critical analysis of the academic evidence and the myths and hype.

The Covid-19 pandemic school shutdowns triggered a rushed adoption of educational technology, which increasingly includes AI-assisted classroom tools (AIED).

This AIED, which by definition is designed to influence child development, also impacts critical issues such as privacy, agency and human dignity, all of which are yet to be fully explored and addressed.

But AI&ED is not only about teaching and learning with AI; it is also about teaching and learning about AI (AI literacy), addressing both the technological dimension and the often-forgotten human dimension of AI.

The report concludes with a provisional needs analysis, the aim being to stimulate further critical debate by the Council of Europe's member states and other stakeholders and to ensure that education systems respond both proactively and effectively to the numerous opportunities and challenges introduced by AI&ED.

Download the provisional edition of this publication

Everyone Wants Responsible Artificial Intelligence, Few Have It Yet – Forbes

With great power comes great responsibility.

As artificial intelligence continues to gain traction, there has been a rising level of discussion about responsible AI (and, closely related, ethical AI). While AI is entrusted to carry more decision-making workloads, it's still based on algorithms that respond to models and data, as I and my co-author Andy Thurai explain in a recent Harvard Business Review article. As a result, AI often misses the big picture and most times can't analyze the decision with the reasoning behind it. It certainly isn't ready to assume human qualities that emphasize empathy, ethics, and morality.

Is this a concern that is shared within the executive suites of companies deploying AI? Yes, a recent study of 1,000 executives published by MIT Sloan Management Review and Boston Consulting Group confirms. However, the study finds, while most executives agree that responsible AI is instrumental to mitigating the technology's risks, including issues of safety, bias, fairness, and privacy, they acknowledged a failure to prioritize it. In other words, when it comes to AI, it's damn the torpedoes and full speed ahead. However, more attention needs to be paid to those torpedoes, which may take the form of lawsuits, regulations, and damaging decisions. At the same time, more adherence to responsible AI may deliver tangible business benefits.

While AI initiatives are surging, responsible AI is lagging, the authors of the MIT-BCG survey report, Elizabeth M. Renieris, David Kiron, and Steven Mills, write. The gap increases the possibility of failure and exposes companies to regulatory, financial, and customer satisfaction risks.

Just about everyone sees the logic in making AI more responsible: 84% believe that it should be a top management priority. About half of the executives surveyed, 52%, say their companies practice some level of responsible AI. However, only 25% reported that their organization has a fully mature program; the remainder say their implementations are limited in scale and scope.

Confusion and a lack of consensus over the meaning of responsible AI may be a limiting factor. Only 36% of respondents believe the term is used consistently throughout their organizations, the survey finds. The survey's authors define responsible AI as a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.

Other factors inhibiting responsible AI include a lack of responsible AI expertise and talent, training, or knowledge among staff members (54%); lack of prioritization and attention by senior leaders (53%); and a lack of funding or resourcing for responsible AI initiatives (43%).

Renieris and her co-authors identified a segment of companies that are ahead of the curve with responsible AI, which tend to apply responsible conduct not just to AI but across their entire suites of technologies, systems, and processes. For these leading companies, responsible AI is less about a particular technology than about the company itself, they state.

These leading companies are also seeing pronounced business benefits as a result of this attitude. Benefits realized since implementing responsible AI initiatives include better products and services (cited by 50%), enhanced brand differentiation (48%), and accelerated innovation (43%).

The following are recommendations based on the experiences of companies taking the lead with responsible AI:

The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act – Fasken

Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]

On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]

If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems as well as monetary penalties and new criminal offences for certain unlawful or fraudulent conduct in respect of AI systems.

Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.

Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harm caused by AI in a manner that seeks to remain balanced against the need to allow technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and controlling their development and deployment. Further, much of the substance and detail of AIDA is left to be elaborated in future regulations, including the key definition of high-impact AI systems to which most of AIDA's obligations attach.

The table below sets out some of the key similarities and differences between the current drafts of AIDA and the EU AI Act.

High-risk system means:

The EU AI Act does not apply to:

AIDA does not stipulate an outright ban on AI systems presenting an unacceptable level of risk.

It does, however, make it an offence to:

The EU AI Act prohibits certain AI practices and certain types of AI systems, including:

Persons who process anonymized data for use in AI systems must establish measures (in accordance with future regulations) with respect to:

High-risk systems that use data sets for training, validation and testing must be subject to appropriate data governance and management practices that address:

Data sets must:

Transparency. Persons responsible for high-impact systems must publish on a public website a plain-language description of the AI system which explains:

Transparency. AI systems which interact with individuals and pose transparency risks, such as those that incorporate emotion recognition systems or risks of impersonation or deception, are subject to additional transparency obligations.

Regardless of whether or not the system qualifies as high-risk, individuals must be notified that they are:

Persons responsible for AI systems must keep records (in accordance with future regulations) describing:

High-risk AI systems must:

Providers of high-risk AI systems must:

The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.

The Minister of Industry has the following powers:

The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act. Each Member State will designate or establish a national supervisory authority.

The Commission has the authority to:

Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.

Contraventions of AIDA's governance and transparency requirements can result in fines:

Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above or obstructing or providing false or misleading information during an audit or investigation) may be liable to:

While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA only encapsulates technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which as currently drafted would appear to include even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before its final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]

Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.

Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act that present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.

Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitive systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement, and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulations, except for transparency requirements.

AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.

AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain, from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.

The EU AI Act specifically applies to providers (although this may be interpreted broadly) and users of AI systems, including government institutions but excluding where AI systems are exclusively developed for military purposes. The EU AI Act also expressly applies to providers and users of AI systems insofar as the output produced by those systems is used in the EU.

AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.

The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticisms of this standard for being too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.

While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for such systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.

Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but unlike AIDA, its requirements will only apply to high-risk systems, whereas AIDA's record-keeping requirements would apply to all AI systems.

Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.

Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.

Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences, up to $25 million CAD or 5% of the offender's gross global revenues from the preceding financial year, are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's most severe penalties: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.

In contrast to the EU AI Act, AIDA also introduces new criminal offences for the most serious offences committed under the act.

Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.

While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.

It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.

Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.

For more information on the potential implications of the new Bill C-27, Digital Charter Implementation Act, 2022, please see our bulletin, The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law, on this topic.

[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA and the United Kingdom's Medicines and Healthcare Products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.

[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.

[3] This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.

[4] As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.
