Archive for the ‘Artificial Intelligence’ Category

The Computers Are Getting Better at Writing, Thanks to Artificial Intelligence – The New Yorker

At first, I was confused by this continuation from the machine. For one thing, Englander doesn't write with sentence fragments, but, upon rereading, the content seemed Englander-esque to me. "It's a shocking and terrifying leap," he said, when I showed it to him. "Yes, it's off. But not in the sense that a computer wrote it but in the sense that someone just starting to write fiction wrote it: sloppy but well-meaning. It's like it has the spark of life to it but just needs to sit down and focus and put the hours in." Although Englander doesn't feel the passage is something he would write, he doesn't hate it, either. "It was like the work of someone aspiring to write," he said. "Like maybe a well-meaning pre-med student or business student fulfilling a writing requirement because they have to: the work is there, but maybe without some of the hunger. But it definitely feels teachable. I'd totally sit down and have a cup of coffee with the machine. You know, to talk things out."

Friendliness will not be the typical reaction, I fear. The first reaction to this technology will be dismissal: that the technology isn't really doing anything much at all, that it isn't writing, that it's just a toy. The second reaction will be unease: that the technology is doing too much, that it is writing, that it will replace the human. GPT-3 is a tool. It does not think or feel. It performs instructions in language. The OpenAI people imagine it for generating news articles, translation, answering questions. But these are the businessman's pedantic and vaguely optimistic approaches to the world's language needs.

For those who choose to use artificial intelligence, it will alter the task of writing. "The writer's job becomes as an editor almost," Gupta said. "Your role starts to become deciding what's good and executing on your taste, not as much the low-level work of pumping out word by word by word. You're still editing lines and copy and making those words beautiful, but, as you move up in that chain, and you're executing your taste, you have the potential to do a lot more." The artist wants to do something with language. The machines will enact it. The intention will be the art, the craft of language an afterthought.

For writers who don't like writing (which, in my experience, is nearly all of us) Sudowrite may well be a salvation. Just pop in what you have, whatever scraps of notes, and let the machine give you options. There are other, more obvious applications. Sudowrite was relatively effective when I asked it to continue Charles Dickens's unfinished novel The Mystery of Edwin Drood. I assume it will be used by publishers to complete unfinished works like Jane Austen's Sanditon or P.G. Wodehouse's Sunset at Blandings. With a competent technician and an editor-writer you could compose them now, rapidly, with the technology that's available. There must be a market for a new Austen or Wodehouse. I could do either in a weekend. (Other writers have already tried to write like Austen and Wodehouse, but even excellent examples always feel like contemporary versions of their works. If you used a Wodehouse machine or an Austen machine, it would sound like they sound. The future would not have happened to the algorithm.)

Gupta knows that Sudowrite is only beginning to sense, dimly, the possibilities of GPT-3, never mind the possibilities of artificial intelligence in natural language. GPT-3 is perhaps the Model A of this technology. The above is a small taste of what can be done at a hundred and seventy-five billion parameters. What happens at a trillion? What happens at ten trillion? The human brain has about a hundred trillion parameters. What happens when the technology passes that number? "It's early days," Gupta said. "I see a future where it gets super more sophisticated and it helps you realize ideas that you couldn't realize easily on your own."

The creative possibilities are exciting and terrifying. Englander didn't really see the machine as a competitor to himself but almost as a student that he could coax out of weakness. "If it was desperately trying to tell me something about dreams, it would have the juice," Englander told me. "But this was more like the work of somebody who admires writing but doesn't need to write. Someone (a living, breathing someone) trying to sound like they think a writer should."

Already, what GPT-3 shows is that literary style is an algorithm, a complex series of instructions. The reason a passage from Kafka or Coleridge or Englander doesn't look like math is that the mind isn't capable of holding a hundred and seventy-five billion parameters at the same time. Very soon, when you read a text you will not be able to assume a person intended or wrote that language. Eventually, this technology will escape the confines of the scientific realm, to those who use language to manipulate and to control, and perhaps even those who use language to express and celebrate.

Meanwhile, there are other avenues of meaning to explore. If you put the whole of The Metamorphosis into Sudowrite, it will compose an extension:

When Mr. Samsa thought about moving house and how they could all pull their way up again, he couldn't suppress a shiver running down his spine, and this shiver ran down the spine of all three of them, he could see, as if they were collectively possessed and trembling all over with some sort of fever. It was more like a convulsion than a shiver. Then they calmed down, quite exhausted. Mr. Samsa thought: "If we continue like this we'll soon be having a charwoman in our family. First thing tomorrow I'll make inquiries about suitable flats." And then he looked up: the two women were gazing ahead and did not see that he was looking at them. All of a sudden Grete burst out laughing. "What's that for?" asked her father. "Well, just because," replied Grete. And Mr. Samsa did not insist.

But The Metamorphosis doesn't need extension. It's perfect. It has survived because the core meaning of its story continues to resonate. Gregor is subject to a miracle that is at once a revelation and a catastrophe. The human entity changes once again, in a way that is both magical and degrading.

An earlier version of this post misidentified the name of a function in Sudowrite as well as its proposed cost.


Artificial intelligence is learning how to dodge space junk in orbit – Space.com

An AI-driven space debris-dodging system could soon replace expert teams dealing with growing numbers of orbital collision threats in the increasingly cluttered near-Earth environment.

Every two weeks, spacecraft controllers at the European Space Operations Centre (ESOC) in Darmstadt, Germany, have to conduct avoidance manoeuvres with one of their 20 low Earth orbit satellites, Holger Krag, the Head of Space Safety at the European Space Agency (ESA), said in a news conference organized by ESA during the 8th European Space Debris Conference, held virtually from Darmstadt, Germany, April 20 to 23. There are at least five times as many close encounters that the agency's teams monitor and carefully evaluate, each requiring a multi-disciplinary team to be on call 24/7 for several days.

"Every collision avoidance manoeuvre is a nuisance," Krag said. "Not only because of fuel consumption but also because of the preparation that goes into it. We have to book ground-station passes, which costs money, sometimes we even have to switch off the acquisition of scientific data. We have to have an expert team available round the clock."

The frequency of such situations is only expected to increase. Not all collision alerts are caused by pieces of space debris. Companies such as SpaceX, OneWeb and Amazon are building megaconstellations of thousands of satellites, lofting more spacecraft into orbit in a single month than used to be launched within an entire year only a few years ago. This increased space traffic is causing concerns among space debris experts. In fact, ESA said that nearly half of the conjunction alerts currently monitored by the agency's operators involve small satellites and constellation spacecraft.

ESA, therefore, asked the global Artificial Intelligence community to help develop a system that would take care of space debris dodging autonomously or at least reduce the burden on the expert teams.

"We made a large historic data set of past conjunction warnings available to a global expert community and tasked them to use AI [Artificial Intelligence] to predict the evolution of a collision risk of each alert over the three days following the alert," Rolf Densing, Director of ESA Operations said in the news conference.

"The results are not yet perfect, but in many cases, AI was able to replicate the decision process and correctly identify in which cases we had to conduct the collision avoidance manoeuvre."


The agency will explore newer approaches to AI development, such as deep learning and neural networks, to improve the accuracy of the algorithms, Tim Flohrer, the Head of ESA's Space Debris Office, told Space.com.

"The standard AI algorithms are trained on huge data sets," Flohrer said. "But the cases when we had actually conducted manoeuvres are not so many in AI terms. In the next phase we will look more closely into specialised AI approaches that can work with smaller data sets."

For now, the AI algorithms can aid the ground-based teams as they evaluate and monitor each conjunction alert, the warning that one of their satellites might be on a collision course with another orbiting body. According to Flohrer, such AI assistance will help reduce the number of experts involved and help the agency deal with the increased space traffic expected in the near future. For now, though, the decision whether or not to conduct an avoidance manoeuvre still has to be made by a human operator.

"So far, we have automated everything that would require an expert brain to be awake 24/7 to respond to and follow up the collision alerts," said Krag. "Making the ultimate decision whether to conduct the avoidance manoeuvre or not is the most complex part to be automated and we hope to find a solution to this problem within the next few years."

Ultimately, Densing added, the global community should work together to create a collision avoidance system similar to modern air-traffic management, one that would work autonomously, without human operators on the ground having to coordinate with one another.

"In air traffic, they are a step further," Densing said. "Collision avoidance manoeuvres between planes are decentralised and take place automatically. We are not there yet, and it will likely take a bit more international coordination and discussions."

Not only are scientific satellites at risk of orbital collisions, but spacecraft like SpaceX's Crew Dragon could be affected as well. Recently, Crew Dragon Endeavour, with four astronauts on board, reportedly came dangerously close to a small piece of debris on Saturday, April 24, during its cruise to the International Space Station. The collision alert forced the spacefarers to interrupt their leisure time, climb back into their space suits and buckle up in their seats to brace for a possible impact.

According to ESA, about 11,370 satellites have been launched since 1957, when the Soviet Union successfully orbited a beeping ball called Sputnik. About 6,900 of these satellites remain in orbit, but only 4,000 are still functioning.

Follow Tereza Pultarova on Twitter @TerezaPultarova.


Arize AI Named to Forbes AI 50 List of Most Promising Artificial Intelligence Companies of 2021 – PRNewswire

BERKELEY, Calif., April 30, 2021 /PRNewswire/ -- Arize AI, the leading Machine Learning (ML) Observability company, has been named to the Forbes AI 50, a list of the top private companies using artificial intelligence to transform industries.

The Forbes AI 50 list, now in its third year, recognizes private North American companies using artificial intelligence in ways that are fundamental to their operations, such as machine learning, natural language processing, and computer vision.

Today, companies spend millions of dollars developing and implementing ML models, only to see a myriad of unexpected performance degradation issues arise. Models that don't perform after the code is shipped are painful to troubleshoot and negatively impact business operations and results.

"Arize AI is squarely focused on the last mile of AI: models that are in production and making decisions that can cost businesses millions of dollars a day," said Jason Lopatecki, co-founder and CEO of Arize. "We are excited that the AI 50 panel recognizes the importance of software that can watch, troubleshoot, explain and provide guardrails on AI, as it is deployed into the real world, and views Arize AI as a leader in this category."

In partnership with Sequoia Capital and Meritech Capital, Forbes evaluated hundreds of submissions from the U.S. and Canada. A panel of expert AI judges then reviewed the finalists to hand-pick the 50 most compelling companies.

About Arize AI

Arize AI was founded by leaders in the Machine Learning (ML) Infrastructure and analytics space to bring better visibility and performance management over AI. Arize AI built the first ML Observability platform to help make machine learning models work in production. As models move from research to the real world, we provide a real-time platform to monitor, explain and troubleshoot model/data issues.

Media Contact: Krystal Kirkland [emailprotected]

SOURCE Arize AI

http://www.arize.com


Europe Seeks to Tame Artificial Intelligence with the World's First Comprehensive Regulation – JD Supra

In what could be a harbinger of the future regulation of artificial intelligence (AI) in the United States, the European Commission published its recent proposal for regulation of AI systems. The proposal is part of the European Commission's larger European strategy for data, which seeks to defend and promote European values and rights in how we design, make, and deploy technology in the economy. To this end, the proposed regulation attempts to address the potential risks that AI systems pose to the health, safety, and fundamental rights of Europeans.

Under the proposed regulation, AI systems presenting the least risk would be subject to minimal disclosure requirements, while at the other end of the spectrum AI systems that exploit human vulnerabilities and government-administered biometric surveillance systems are prohibited outright except under certain circumstances. In the middle, high-risk AI systems would be subject to detailed compliance reviews. In many cases, such high-risk AI system reviews will be in addition to regulatory reviews that apply under existing EU product regulations (e.g., the EU already requires reviews of the safety and marketing of toys and radio frequency devices such as smart phones, Internet of Things devices, and radios).

Applicability

The proposed AI regulation applies to all providers that market in the EU or put AI systems into service in the EU as well as users of AI systems in the EU. This scope includes governmental authorities located in the EU. The proposed regulation also applies to providers and users of AI systems whose output is used within the EU, even if the producer or user is located outside of the EU. If the proposed AI regulation becomes law, the enterprises that would be most significantly affected by the regulation are those that provide high-risk AI systems not currently subject to detailed compliance reviews under existing EU product regulations, but that would be under the AI regulation.

Scope of AI Covered by the AI Regulation

The term "AI system" is defined broadly as software that uses any of several identified approaches to generate outputs for a set of human-defined objectives. These approaches cover far more than artificial neural networks and other technologies currently viewed by many as "traditional" AI. In fact, the identified approaches cover many types of software that few would likely consider AI, such as statistical approaches and search and optimization methods. Under this definition, the AI regulation would seemingly cover the day-to-day tools of nearly every e-commerce platform, social media platform, advertiser, and other businesses that rely on such commonplace tools to operate.

This apparent breadth can be assessed in two ways. First, this definition may be intended as a placeholder that will be further refined after the public release. There is undoubtedly no perfect definition for AI system, and by releasing the AI regulation in its current form, lawmakers and interested parties can alter the scope of the definition following public commentary and additional analysis. Second, most AI systems inadvertently caught in the net of this broad definition would likely not fall into the high-risk category of AI systems. In other words, these systems generally do not negatively affect the health and safety or fundamental rights of Europeans, and would only be subject to disclosure obligations similar to the data privacy regulations already applicable to most such systems.

Prohibited AI Systems

The proposed regulation prohibits uses of AI systems for purposes that the EU considers to be unjustifiably harmful. Several categories are directed at private sector actors, including prohibitions on the use of so-called dark patterns through subliminal techniques beyond a person's consciousness, or the exploitation of age or physical or mental vulnerabilities to manipulate behavior in a way that causes physical or psychological harm.

The remaining two areas of prohibition are focused primarily on governmental actions. First, the proposed regulation would prohibit use of AI systems by public authorities to develop social credit systems for determining a person's trustworthiness. Notably, this prohibition has carveouts, as such systems are only prohibited if they result in detrimental or unfavorable treatment, and even then only if unjustified, disproportionate, or disconnected from the content of the data gathered. Second, indiscriminate surveillance practices by law enforcement that use biometric identification are prohibited in public spaces except in certain exigent circumstances, and with appropriate safeguards on use. These restrictions reflect the EU's larger concerns regarding government overreach in the tracking of its citizens. Military uses are outside the scope of the AI regulation, so this prohibition is essentially limited to law enforcement and civilian government actors.

High-Risk AI Systems

High-risk AI systems receive the most attention in the AI regulation. These are systems that, according to the memorandum accompanying the regulation, pose a significant risk to the health and safety or fundamental rights of persons. This boils down to AI systems that (1) are a regulated product or are used as a safety component for a regulated product like toys, radio equipment, machinery, elevators, automobiles, and aviation, or (2) fall into one of several categories: biometric identification, management of critical infrastructure, education and training, human resources and access to employment, law enforcement, administration of justice and democratic processes, migration and border control management, and systems for determining access to public benefits. The regulation contemplates this latter category evolving over time to include other products and services, some of which may face little product regulation at present. Enterprises that provide these products may be venturing into an unfamiliar and evolving regulatory space.

High-risk AI systems would be subject to extensive requirements, obliging companies to develop new compliance and monitoring procedures, as well as to make changes to products on both the front end and the back end.

Transparency Requirements

The regulation would impose transparency and disclosure requirements for certain AI systems regardless of risk. Any AI system that interacts with humans must include disclosures to the user that they are interacting with an AI system. The AI regulation provides no further details on this requirement, so a simple notice that an AI system is being used would presumably satisfy it. Most AI systems (as defined in the regulation) would fall outside of the prohibited and high-risk categories, and so would only be subject to this disclosure obligation. For that reason, while the broad definition of AI system captures much more than traditional artificial intelligence techniques, most enterprises will feel minimal impact from being subject to these regulations.

Penalties

The proposed regulation provides for tiered penalties depending on the nature of the violation. Prohibited uses of AI systems (subliminal manipulation, exploitation of vulnerabilities, and development of social credit systems) and prohibited development, testing, and data use practices could result in fines of the higher of either 30,000,000 EUR or 6% of a company's total worldwide annual revenue. Violation of any other requirements or obligations of the proposed regulation could result in fines of the higher of either 20,000,000 EUR or 4% of a company's total worldwide annual revenue. Supplying incorrect, incomplete, or misleading information to certification bodies or national authorities could result in fines of the higher of either 10,000,000 EUR or 2% of a company's total worldwide annual revenue.

Notably, EU government institutions are also subject to fines, with penalties up to 500,000 EUR for engaging in prohibited practices that would result in the highest fines had the violation been committed by a private actor, and fines for all other violations up to 250,000 EUR.
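The tiered fines follow a "greater of a fixed amount or a percentage of worldwide annual revenue" formula, much like the GDPR's. The arithmetic can be sketched as follows; this is an illustration of the calculation described above, not legal guidance, and the tier names are labels invented for this example.

```python
# Illustrative arithmetic for the proposed tiered fines: the higher of a
# fixed amount or a percentage of total worldwide annual revenue.
# Tier names are invented labels for this sketch.
TIERS = {
    "prohibited_use": (30_000_000, 0.06),   # e.g., subliminal manipulation
    "other_violation": (20_000_000, 0.04),  # other requirements/obligations
    "misleading_info": (10_000_000, 0.02),  # false info to authorities
}


def max_fine_eur(tier: str, worldwide_annual_revenue_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed amount or the
    revenue percentage for the given violation tier."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * worldwide_annual_revenue_eur)


# For a company with 1 billion EUR worldwide revenue, the percentage governs:
print(max_fine_eur("prohibited_use", 1_000_000_000))   # -> 60000000.0
# For a company with 100 million EUR revenue, the fixed floor governs:
print(max_fine_eur("prohibited_use", 100_000_000))     # -> 30000000
```

The "higher of" structure means small companies face the fixed floor while large companies face the revenue-based ceiling, which is why worldwide revenue, not EU revenue, is the relevant figure.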

Prospects for Becoming Law

The proposed regulation remains subject to amendment and approval by the European Parliament and potentially the European Council, a process which can take several years. During this long legislative journey, components of the regulation could change significantly, and it may not even become law.

Key Takeaways for U.S. Companies Developing AI Systems

Compliance With Current Laws

Although the proposed AI regulation would mark the most comprehensive regulation of AI to date, stakeholders should be mindful that current U.S. and EU laws already govern some of the conduct it attributes to AI systems. For example, U.S. federal law prohibits unlawful discrimination on the basis of a protected class in numerous scenarios, such as in employment, the provision of public accommodations, and medical treatment. Uses of AI systems that result in unlawful discrimination in these arenas already pose significant legal risk. Similarly, AI systems that affect public safety or are used in an unfair or deceptive manner could be regulated through existing consumer protection laws.

Apart from such generally applicable laws, U.S. laws regulating AI are limited in scope, and focus on disclosures related to AI systems interacting with people or are limited to providing guidance under current law in an industry-specific manner, such as with autonomous vehicles. There is also a movement towards enhanced transparency and disclosure obligations for users when their personal data is processed by AI systems, as discussed further below.

Implications for Laws in the United States

To date, no state or federal laws specifically targeting AI systems have been successfully enacted into law. If the proposed EU AI regulation becomes law, it will undoubtedly influence the development of AI laws in Congress and state legislatures, and potentially globally. This is a trend we saw with the EUs General Data Protection Regulation (GDPR), which has shaped new data privacy laws in California, Virginia, Washington, and several bills before Congress, as well as laws in other countries.

U.S. legislators have so far proposed bills that would regulate AI systems in a specific manner, rather than comprehensively as the EU AI regulation purports to do. In the United States, algorithmic accountability legislation attempts to address concerns about high-risk AI systems similar to those articulated in the EU through self-administered impact assessments and required disclosures, but lacks the EU proposal's outright prohibition on certain uses of AI systems and its nuanced analysis of AI systems used by government actors. Other bills would solely regulate government procurement and use of AI systems, for example, California AB-13 and Washington SB-5116, leaving industry free to develop AI systems for private, nongovernmental use. Upcoming privacy laws such as the California Privacy Rights Act (CPRA) and the Virginia Consumer Data Protection Act (CDPA), both effective January 1, 2023, do not attempt to comprehensively regulate AI, instead focusing on disclosure requirements and data subject rights related to profiling and automated decision-making.

Conclusion

Ultimately, the AI regulation (in its current form) will have minimal impact on many enterprises unless they are developing systems in the high-risk category that are not currently regulated products. But some stakeholders may be surprised by, and unsatisfied with, the fact that the draft legislation puts relatively few additional restrictions on purely private sector AI systems that are not already subject to regulation. The drafters presumably did so to avoid overly burdening private sector activities. But it is yet to be seen whether any enacted form of the AI regulation would strike that balance in the same way.



Use Of Artificial Intelligence Attracts Legislative And Regulatory Attention In The EU, US, And Israel – Technology – Worldwide – Mondaq News Alerts

30 April 2021

Pearl Cohen Zedek Latzer Baratz


The European Commission is proposing new legislative rules aimed to promote excellence and trust in the field of Artificial Intelligence (AI). The new proposal of EU regulation lays down: (a) harmonized rules for the use of artificial intelligence systems in the EU; (b) prohibitions of certain particularly harmful AI practices; (c) specific requirements for high-risk AI systems and obligations for operators of such systems; (d) harmonized transparency rules for AI systems intended to interact with individuals, such as emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio or video content; and (e) rules on market monitoring and surveillance.

The proposal's declared purpose is to lay down a balanced and proportionate regulatory approach between the minimal requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.

Meanwhile, in the United States, the Federal Trade Commission has offered business guidance on AI and algorithms, and how companies can manage the consumer protection risks of AI and algorithms. The FTC emphasizes that the use of AI tools should be transparent, explainable, fair, and empirically sound while fostering accountability. The FTC says that the use of AI technology to make predictions, recommendations, or decisions has great potential to improve welfare and productivity. However, it also presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities.

In Israel, the Innovation Authority and the Ministry of Justice have published a call for public comments and proposals on regulatory restraints and possible regulation in the field of AI, with an emphasis on experimenting and implementing AI systems, such as decision support systems with or without the involvement of human judgment.

The call seeks feedback from the general public on questions such as the nature of desirable AI regulation considering Israel's leading position as an R&D hub in the AI field; global regulatory models aimed to advance the AI field; and regulatory gaps between Israel and other countries. Comments can be submitted by email until May 13, 2021.

CLICK HERE to read the European Commission's proposed regulation.

CLICK HERE to read the recent FTC guide for use of AI and algorithms.

CLICK HERE to read the Israeli AI call for public comments (in Hebrew).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

