Archive for the ‘Artificial General Intelligence’ Category

Threats by artificial intelligence to human health and human existence – BMJ

Summary box

The development of artificial intelligence is progressing rapidly with many potential beneficial uses in healthcare. However, AI also has the potential to produce negative health impacts. Most of the health literature on AI is biased towards its potential benefits, and discussions about its potential harms tend to be focused on the misapplication of AI in clinical settings.

We identify how artificial intelligence could harm human health via its impacts on the social and upstream determinants of health through: the control and manipulation of people, use of lethal autonomous weapons and the effects on work and employment. We then highlight how self-improving artificial general intelligence could threaten humanity itself.

Effective regulation of the development and use of artificial intelligence is needed to avoid harm. Until such effective regulation is in place, a moratorium on the development of self-improving artificial general intelligence should be instituted.

Artificial intelligence (AI) is broadly defined as a machine with the ability to perform tasks such as computing, analysing, reasoning, learning and discovering meaning.[1] Its development and application are advancing rapidly, in terms of both narrow AI, where a limited and focused set of tasks is conducted,[2] and broad or broader AI, where multiple functions and different tasks are performed.[3]

AI holds the potential to revolutionise healthcare by improving diagnostics, helping develop new treatments, supporting providers and extending healthcare beyond the health facility and to more people.[4-7] These beneficial impacts stem from technological applications such as language processing, decision support tools, image recognition, big data analytics, robotics and more.[8-10] There are similar applications of AI in other sectors with the potential to benefit society.

However, as with all technologies, AI can be applied in ways that are detrimental. The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm,[11, 12] issues with data privacy and security,[13-15] and the use of AI in ways that will worsen social and health inequalities, by either incorporating existing human biases and patterns of discrimination into automated algorithms or deploying AI in ways that reinforce social inequalities in access to healthcare.[16] One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.[17] Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned.[18] It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.[16, 19, 20]

Although there is some acknowledgement of the risks and potential harms associated with the application of AI in medicine and healthcare,[11-16, 20] there is still little discussion within the health community about the broader and more upstream social, political, economic and security-related threats posed by AI. With the exception of some voices,[9, 10] the existing health literature examining the risks posed by AI focuses on those associated with the narrow application of AI in the health sector.[11-16, 20] This paper seeks to help fill this gap. It describes three threats associated with the potential misuse of narrow AI, before examining the potential existential threat of self-improving general-purpose AI, or artificial general intelligence (AGI) (figure 1). It then calls on the medical and public health community to deepen its understanding of the emerging power and transformational potential of AI, and to involve itself in current policy debates on how the risks and threats of AI can be mitigated without losing its potential rewards and benefits.

Figure 1: Threats posed by the potential misuse of artificial intelligence (AI) to human health and well-being, and existential-level threats to humanity posed by self-improving artificial general intelligence (AGI).

In this section, we describe three sets of threats associated with the misuse of AI, whether it be deliberate, negligent, accidental or because of a failure to anticipate and prepare to adapt to the transformational impacts of AI on society.

The first set of threats comes from the ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras, and to develop highly personalised and targeted marketing and information campaigns as well as greatly expanded systems of surveillance. This ability of AI can be put to good use, for example, improving our access to information or countering acts of terrorism. But it can also be misused with grave consequences.

The use of this power to generate commercial revenue for social media platforms, for example, has contributed to the rise in polarisation and extremist views observed in many parts of the world.[21] It has also been harnessed by other commercial actors to create a vast and powerful personalised marketing infrastructure capable of manipulating consumer behaviour. Experimental evidence has shown how AI used at scale on social media platforms provides a potent tool for political candidates to manipulate their way into power,[22, 23] and it has indeed been used to manipulate political opinion and voter behaviour.[24-26] Cases of AI-driven subversion of elections include the 2013 and 2017 Kenyan elections,[27] the 2016 US presidential election and the 2017 French presidential election.[28, 29]

When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict,[26-28] with ensuing public health impacts.

AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly. This is perhaps best illustrated by China's Social Credit System, which combines facial recognition software and analysis of big data repositories of people's financial transactions, movements, police records and social relationships to produce assessments of individual behaviour and trustworthiness, which result in the automatic sanction of individuals deemed to have behaved poorly.[30, 31] Sanctions include fines, denying people access to services such as banking and insurance, or preventing them from being able to travel or send their children to fee-paying schools. This type of AI application may also exacerbate social and health inequalities and lock people into their existing socioeconomic strata. But China is not alone in developing AI surveillance: at least 75 countries, ranging from liberal democracies to military regimes, have been expanding such systems.[32] Although democracy and rights to privacy and liberty may be eroded or denied without AI, the power of AI makes it easier for authoritarian or totalitarian regimes to be established or solidified, and for such regimes to target particular individuals or groups in society for persecution and oppression.[30, 33]

The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS). There are many applications of AI in military and defence systems, some of which may be used to promote security and peace. But the risks and threats associated with LAWS outweigh any putative benefits.

Weapons are autonomous in so far as they can locate, select and engage human targets without human supervision.[34] This dehumanisation of lethal force is said to constitute the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear arms.[34-36] Lethal autonomous weapons come in different sizes and forms, but crucially they include weapons and explosives that may be attached to small, mobile and agile devices (eg, quadcopter drones) with the intelligence and ability to self-pilot and the capacity to perceive and navigate their environment. Moreover, such weapons could be cheaply mass-produced and relatively easily set up to kill at an industrial scale.[36, 37] For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill en masse without human supervision.[36]

As with chemical, biological and nuclear weapons, LAWS present humanity with a new weapon of mass destruction, one that is relatively cheap and that also has the potential to be selective about who or what is targeted. This has deep implications for the future conduct of armed conflict as well as for international, national and personal security more generally. Debates have been taking place in various forums on how to prevent the proliferation of LAWS, and about whether such systems can ever be kept safe from cyber-infiltration or from accidental or deliberate misuse.[34-36]

The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology. Projections of the speed and scale of job losses due to AI-driven automation range from tens to hundreds of millions over the coming decade.[38] Much will depend on the speed of development of AI, robotics and other relevant technologies, as well as on policy decisions made by governments and society. However, in a survey of the most-cited authors on AI in 2012/2013, participants predicted the full automation of human labour shortly after the end of this century.[39] It is already anticipated that in this decade, AI-driven automation will disproportionately impact low/middle-income countries by replacing lower-skilled jobs,[40] and will then continue up the skill ladder, replacing larger and larger segments of the global workforce, including in high-income countries.

While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour, including harmful consumption of alcohol[41-44] and illicit drugs,[43, 44] being overweight,[43] having lower self-rated quality of life[41, 45] and health,[46] and experiencing higher levels of depression[44] and risk of suicide.[41, 47] An optimistic vision of a future where human workers are largely replaced by AI-enhanced automation is one in which improved productivity lifts everyone out of poverty and ends the need for toil and labour. However, the amount of exploitation our planet can sustain for economic production is limited, and there is no guarantee that any of the added productivity from AI would be distributed fairly across society. Thus far, increasing automation has tended to shift income and wealth from labour to the owners of capital, and appears to contribute to the increasing maldistribution of wealth across the globe.[48-51] Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health.

Self-improving general-purpose AI, or AGI, is a theoretical machine that can learn and perform the full range of tasks that humans can.[52, 53] By being able to learn and recursively improve its own code, it could improve its capacity to improve itself and could theoretically learn to bypass any constraints in its code and start developing its own purposes; alternatively, it could be equipped with this capacity from the beginning by humans.[54, 55]

The vision of a conscious, intelligent and purposeful machine able to perform the full range of tasks that humans can has been the subject of academic and science fiction writing for decades. But whether conscious and purposeful or not, a self-improving or self-learning general-purpose machine with superior intelligence and performance across multiple dimensions would have serious impacts on humans.

We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered. If realised, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons and all the digital systems that increasingly run our societies, could well represent the biggest event in human history.[53] Although the effects and outcome of AGI cannot be known with any certainty, multiple scenarios may be envisioned. These include scenarios where AGI, despite its superior intelligence and power, remains under human control and is used to benefit humanity. Alternatively, we might see AGI operating independently of humans and coexisting with humans in a benign way. Logically, however, there are scenarios where AGI could present a threat to humans, and possibly an existential threat, by intentionally or unintentionally causing harm directly or indirectly: by attacking or subjugating humans, or by disrupting the systems or using up the resources we depend on.[56, 57] A survey of AI society members predicted a 50% likelihood of AGI being developed between 2040 and 2065, with 18% of participants believing that the development of AGI would be existentially catastrophic.[58] Presently, dozens of institutions are conducting research and development into AGI.[59]

Many of the threats described above arise from the deliberate, accidental or careless misuse of AI by humans. Even the risk and threat posed by a form of AGI that exists and operates independently of human control is currently still in the hands of humans. However, there are differing opinions about the degree of risk posed by AI and about the relative trade-offs between risk and potential reward, and harms and benefits.

Nonetheless, with exponential growth in AI research and development,[60, 61] the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of the regulatory institutions we design to minimise risk and harm and maximise benefit. Crucially, as with other technologies, preventing or minimising the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI arms race. It will also require decision making that is free of conflicts of interest and protected from the lobbying of powerful actors with vested interests. Worryingly, large private corporations with vested financial interests and little in the way of democratic or public oversight are leading the field of AGI research.[59]

Different parts of the UN system are now engaged in a desperate effort to ensure that our international social, political and legal institutions catch up with the rapid technological advancements being made with AI. In 2020, for example, the UN established a High-level Panel on Digital Cooperation to foster global dialogue and cooperative approaches for a safe and inclusive digital future.[62] In September 2021, the UN High Commissioner for Human Rights called on all states to place a moratorium on the sale and use of AI systems until adequate safeguards are put in place to avoid the negative, even catastrophic, risks they pose.[63] And in November 2021, the 193 member states of UNESCO adopted an agreement to guide the construction of the necessary legal infrastructure to ensure the ethical development of AI.[64] However, the UN still lacks a legally binding instrument to regulate AI and ensure accountability at the global level.

At the regional level, the European Union has proposed an Artificial Intelligence Act[65] which classifies AI systems into three categories: unacceptable risk, high risk, and limited or minimal risk. This Act could serve as a stepping stone towards a global treaty, although it still falls short of the requirements needed to protect several fundamental human rights and to prevent AI from being used in ways that would aggravate existing inequities and discrimination.

There have also been efforts focused on LAWS, with an increasing number of voices calling for stricter regulation or outright prohibition, just as we have done with biological, chemical and nuclear weapons. State parties to the UN Convention on Certain Conventional Weapons have been discussing lethal autonomous weapon systems since 2014, but progress has been slow.[66]

What can and should the medical and public health community do? Perhaps the most important thing is simply to raise the alarm about the risks and threats posed by AI, and to make the argument that speed and seriousness are essential if we are to avoid the various harmful and potentially catastrophic consequences of AI-enhanced technologies being developed and used without adequate safeguards and regulation. Importantly, the health community is familiar with the precautionary principle[67] and has demonstrated its ability to shape public and political opinion about existential threats in the past. For example, International Physicians for the Prevention of Nuclear War was awarded the Nobel Peace Prize in 1985 because it assembled principled, authoritative and evidence-based arguments about the threats of nuclear war. We must do the same with AI, even as parts of our community espouse its benefits in the fields of healthcare and medicine.

It is also important that we target our concerns not only at AI, but also at the actors who are driving its development too quickly or too recklessly, and at those who seek only to deploy AI for self-interested or malign purposes. If AI is ever to fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances. This includes ensuring transparency and accountability in the parts of the military-corporate industrial complex driving AI developments, and in the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy.

Finally, given that the world of work and employment will drastically change over the coming decades, we should deploy our clinical and public health expertise in evidence-based advocacy for a fundamental and radical rethink of social and economic policy to enable future generations to thrive in a world in which human labour is no longer a central or necessary component to the production of goods and services.


The authors would like to thank Dr Ira Helfand and Dr Chhavi Chauhan for their valuable comments on earlier versions of the manuscript.

View original post here:

Threats by artificial intelligence to human health and human existence - BMJ

The apocalypse isn't coming. We must resist cynicism and fear about AI – The Guardian

Opinion

Remember when WeWork would kill commercial real estate? Crypto would abolish banks? The metaverse would end meeting people in real life?

Mon 15 May 2023 04.06 EDT

In the field of artificial intelligence, doomerism is as natural as an echo. Every development in the field, or to be more precise every development that the public notices, immediately generates an apocalyptic reaction. The fear is natural enough; it comes partly from the lizard-brain part of us that resists whatever is new and strange, and partly from the movies, which have instructed us, for a century, that artificial intelligence will take the form of an angry god that wants to destroy all humanity.

The recent public letter calling for a six-month ban on AI lab work will not, it goes without saying, have the slightest measurable effect on the development of artificial intelligence. But it has changed the conversation: every discussion about artificial intelligence must now begin with the possibility of total human extinction. It's silly and, worse, it's an alibi, a distraction from the real dangers technology presents.

The most important thing to remember about tech doomerism in general is that it's a form of advertising, a species of hype. Remember when WeWork was going to end commercial real estate? Remember when crypto was going to lead to the abolition of central banks? Remember when the metaverse was going to end meeting people in real life? Silicon Valley uses apocalypse for marketing purposes: they tell you their tech is going to end the world to show you how important they are.

I have been working with and reporting on AI since 2017, which is prehistoric in this field. During that time, I have heard, from intelligent sources who were usually reliable, that the trucking industry was about to end and that China was in possession of a trillion-parameter natural language processing AI with superhuman intelligence. I have heard geniuses, bona fide geniuses, declare that medical schools should no longer teach radiology because it would all be automated soon.

One of the reasons AI doomerism bores me is that it's become familiar: I've heard it all before. To stay sane, I have had to abide by twin principles: I don't believe it until I see it. Once I see it, I believe it.

Many of the most important engineers in the field indulge in AI doomerism; this is unquestionably true. But one of the defining features of our time is that the engineers, who in my experience do not have even the faintest education in the humanities or even recognize that society and culture are worthy of study, simply have no idea how their inventions interact with the world. One of the most prominent signatories of the open letter was Elon Musk, an early investor in OpenAI. He is brilliant at technology. But if you want to know how little he understands about people and their relationships to technology, go on Twitter for five minutes.

Not that there aren't real causes of worry when it comes to AI; it's just that they're almost always about something other than AI. The biggest anxiety, that an artificial general intelligence is about to take over the world, doesn't even qualify as science fiction. That fear is religious.

Computers do not have will. Algorithms are a series of instructions. The properties that emerge in the "emergent properties" of artificial intelligence have to be discovered and established by human beings. The anthropomorphization of statistical pattern-matching machinery is storytelling; it's a movie playing in the collective mind, nothing more. Turning off ChatGPT isn't murder. Engineers who hire lawyers for their chatbots are every bit as ridiculous as they sound.

The much more real anxieties brought up by the more substantial critics of artificial intelligence are that AI will super-charge misinformation and will lead to the hollowing out of the middle class by the process of automation. Do I really need to point out that both of these problems predate artificial intelligence by decades, and are political rather than technological?

AI might well make it slightly easier to generate fake content, but the problem of misinformation has never been generation but dissemination. The political space is already saturated with fraud, and it's hard to see how AI could make it much worse. In the first quarter of 2019, Facebook had to remove 2.2bn fake profiles; AI had nothing to do with it. The response to the degradation of our information networks from government and from the social media industry has been a massive shrug, a bunch of antiquated talk about the First Amendment.

Regulating AI is enormously problematic; it involves trying to fathom the unfathomable and make the inherently opaque transparent. But we already know, and have known for over a decade, about the social consequences of social media algorithms. We don't have to fantasize or predict the effects of Instagram. The research is consistent and established: that technology is associated with higher levels of depression, anxiety and self-harm among children. Yet we do nothing. Vague talk about slowing down AI doesn't solve anything; a concrete plan to regulate social media might.

As for the hollowing out of the middle class, inequality in the United States reached the highest level since 1774 back in 2012. AI may not be the problem. The problem may be the foundational economic order AI is entering. Again, vague talk about an AI apocalypse is a convenient way to avoid talking about the self-consumption of capitalism and the extremely hard choices that self-consumption presents.

The way you can tell that doomerism is just more hype is that its solutions are always terminally vague. The open letter called for a six-month ban. What, exactly, do they imagine will happen over those six months? The engineers won't think about AI? The developers won't figure out ways to use it? Doomerism likes its crises numinous, preferably unsolvable. AI fits the bill.

Recently, I used AI to write a novella, The Death of an Author. I won't say that the experience wasn't unsettling. It was quite weird, actually. It felt like I managed to get an alien to write, an alien that is the sum total of our language. The novella itself has, to me anyway, a hypnotic but removed power: inhuman language that makes sense. But the experience didn't make me afraid. It awed me. Let's reside in the awe for a moment, just a moment, before we go to the fear.

If we have to think through AI by way of the movies, can we at least do Star Trek instead of Terminator 2? Something strange has appeared in the sky; let's be a little more Jean-Luc Picard and a little less Klingon in our response. The truth about AI is that nobody, not the engineers who have created it, not the developers converting it into products, fully understands what it is, never mind what its consequences will be. Let's get a sense of what this alien is before we blow it out of the sky. Maybe it's beautiful.


Read the original post:

The apocalypse isn't coming. We must resist cynicism and fear about AI - The Guardian

The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

Curiosity, conversation, and investment into artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren't yet capable of, as well as an assessment of security and performance risks, according to industry experts.

With the tax world exploring how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one's individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with important fundamentals and key technological differences under the broad-stroke term of AI.

An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).

LLMs, according to PricewaterhouseCoopers principal Chris Kontaridis, are text-based and use statistical methodologies to create a relationship between your question and patterns of data and text. In other words, the more data an LLM like ChatGPT, which is currently learning from users across the entire internet, absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT is "not a knowledge model," Kontaridis said. Calling ChatGPT a knowledge model would insinuate that it is going to give you the correct answer every time you put in a question. Because it is not artificial general intelligence, something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not self-reasoning, he said.

"We're not even close to having real AGI out there," Kontaridis added.

Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal said at the ABA conference that "the really important thing when you're using a tool like [ChatGPT] is recognizing its limitations." He explained that it is not providing source material for legal or tax advice. "What it's doing, and this is very important, is simply making a probabilistic determination about the next likely word." For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it might give you a slightly different answer with different words because it's responding to a different prompt.
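Aidid's description of next-word prediction can be made concrete with a toy model. The sketch below is a deliberately tiny bigram language model, a hypothetical illustration only (ChatGPT uses a vastly larger neural network, not bigram counts): it learns which word tends to follow which from a two-sentence corpus, then samples a continuation with no notion of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# A toy corpus; real LLMs train on trillions of tokens, not two sentences.
corpus = ("the taxpayer filed the return late "
          "the taxpayer paid the penalty promptly").split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str | None:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    if not counts:
        return None  # dead end: this word was never followed by anything
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a "plausible" continuation; nothing here checks truth or facts.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the taxpayer paid the return late the"
```

Because the continuation is sampled from learned frequencies, running the script twice can produce different outputs, which mirrors why rephrasing a question to ChatGPT can yield a differently worded answer: the model samples from patterns rather than consulting a store of facts.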

At a separate panel, Ken Crutchfield, vice president and general manager of Legal Markets, said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact that his father, Bryant Crutchfield, is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: "I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said, yes, Bryant Crutchfield did invent the Trapper Keeper." Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father's name but listed his own alma mater. "So it's getting better and kind of learns through these back-and-forths with people that are interacting."

Aidid explained that these instances are referred to as "hallucinations": when an AI does not know the answer, it essentially makes something up on the spot based on the data and patterns it has up to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because it is currently limited to knowledge as recent as September 2021. Generative AI like ChatGPT is still more sophisticated than base-level tools that work off decision trees, such as the IRS Tax Assistant Tool that taxpayers interact with, Aidid said. The Tax Assistant Tool, he noted, is not generative AI; a toy contrast between the two approaches is sketched below.
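For contrast, a rule-based assistant of the kind Aidid distinguishes from generative AI can be sketched in a few lines. This is a hypothetical toy, not the actual IRS tool, and the income threshold is invented for illustration: every answer is hand-written in advance, so the program can never produce text nobody wrote, which is why it cannot hallucinate (and also why it cannot answer questions its rules did not anticipate).

```python
# A toy decision tree in the spirit of rule-based tax assistants.
# Every reachable answer below was written by a human in advance;
# unlike a generative model, it cannot produce novel (or invented) text.
def filing_assistant() -> str:
    income = float(input("Gross income in USD? "))
    if income < 12950:  # hypothetical threshold, for illustration only
        return "You may not be required to file a federal return."
    dependents = input("Any dependents? (y/n) ").strip().lower()
    if dependents == "y":
        return "You may qualify for dependent-related credits; see a preparer."
    return "You likely need to file a standard federal return."

print(filing_assistant())
```

A generative model inverts this trade-off: it can respond to questions no rule anticipated, but every response is synthesized, so nothing in the mechanism guarantees the answer corresponds to the tax code at all.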

Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that it is especially problematic because the [Tax Assistant Tool] is implying that it has all that information and it's generating responses based on the world of information, but it's really not doing that, so it's misleading.

The greatest potential for the application of generative AI lies with so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning can work with unstructured data. Such technology can not only synthesize and review information, but review new information for us. "It's starting to take all that and generate things, not simple predictions, but actually generate things that are in the style and mode of human communication, and that's where we're seeing significant investment today."

Herzfeld said that machine learning is already being used in tax on a daily basis, but with deep learning it is "a little harder to see where that is in tax law." These more advanced tools will likely be developed in-house at firms, often in partnership with AI researchers.

PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. Freeing up staff to focus their efforts on other things while AI sifts through mountains of data is a boon, he said.

However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that "it's really important to make sure before you deploy something like this to your staff or use it yourself that you're doing it in a safe environment where you are protecting the confidentiality of your personal IP and privilege that you have with your clients."

Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance on, or lack of oversight of, AI, which she called a very broad societal risk. Kontaridis assured the audience that he is not worried about generative AI replacing the role of the tax professional: "this is a tool that will help us do our work better."

Referring to the myth that CPA bots will take over the industry, he said: "What I'm worried about is the impact it has on our profession at the university level, discouraging bright young minds from pursuing careers in tax and accounting consulting."

Get all the latest tax, accounting, audit, and corporate finance news with Checkpoint Edge. Sign up for a free 7-day trial today.

Continued here:

The Potential of AI in Tax Practice Relies on Understanding its ... - Thomson Reuters Tax & Accounting

Operation HOPE and CAU Host ChatGPT Creator to Discuss AI – Black Enterprise

Operation HOPE recently partnered with Clark Atlanta University (CAU) to host two events focused on "The Future of Artificial Intelligence" with Sam Altman, OpenAI founder and ChatGPT creator. The conversations were led by Operation HOPE Founder, Chairman, and CEO John Hope Bryant and featured the President of Clark Atlanta University, Dr. George T. French, Jr.

Held on CAU's campus, the first event provided Atlanta's most prominent Black leaders from the public and private sectors an opportunity to engage with Altman and discuss pressing issues around artificial intelligence (AI). The second discussion provided local HBCU and Atlanta-based college students with the same opportunity.

Altman, a billionaire tech pioneer, shared how he believes AI can positively impact lives and create new economic opportunities for communities of color, particularly among students at Historically Black Colleges and Universities (HBCUs). The standing-room-only event included representatives from government, technology, non-profit, education, and the creative industries, among others.

In 2015, Altman co-founded OpenAI, a nonprofit artificial intelligence research and deployment company with the stated mission "to ensure that artificial general intelligence (highly autonomous systems that outperform humans at most economically valuable work) benefits all of humanity." In partnership with Operation HOPE, serial entrepreneur Altman has committed to making AI a force for good by stimulating economic growth, increasing productivity at lower costs and stimulating job creation.

"The promise of an economic boost via machine learning is understandably seductive, but if we want to ensure AI technology has a positive impact, we must all be engaged early on. With proper policy oversight, I believe it can transform the future of the underserved," said Operation HOPE Chairman, Founder, and CEO John Hope Bryant. "The purpose of this discussion is to discover new ways to leverage AI to win in key areas of economic opportunity such as education, housing, employment, and credit. If it can revolutionize business, it can do the same for our communities."

"Getting this right, by figuring out the new society that we want to build and how we want to integrate AI technology, is one of the most important questions of our time," Altman said. "I'm excited to have this discussion with a diverse group of people so that we can build something that humanity as a whole wants and needs."

Throughout the event, Altman and Bryant demystified AI and discussed how modern digital technology is revolutionizing the way today's businesses compete and operate. By putting AI and data at the center of their capabilities, companies are redefining how they create, capture, and share value, and are achieving impressive growth as a result. During the Q&A session, they also discussed how government agencies can address AI policies that will lead to more equitable outcomes.

Altman is an American entrepreneur, angel investor, co-founder of Hydrazine Capital, former president of Y Combinator, founder and former CEO of Loopt, and co-founder and CEO of OpenAI. He was also one of TIME Magazine's 100 Most Influential People of 2023.

According to recent research by IBM, more than one in three businesses were using AI technology in 2022. The report also notes that adoption is accelerating, with 42% of businesses currently considering incorporating AI into their processes. Other research suggests that although the public sector is lagging, an increasing number of government agencies are considering or starting to use AI to improve operational efficiencies and decision-making (McKinsey, 2020).

Link:

Operation HOPE and CAU Host ChatGPT Creator to Discuss AI - Black Enterprise

AI can be transformative technology, only with appropriate restrictions and safeguards against malicious use – ZAWYA

Check Point Research (CPR), the Threat Intelligence arm of Check Point Software Technologies Ltd. (NASDAQ: CHKP) and a leading provider of cyber security solutions globally, warns that artificial intelligence has the potential to be a transformative technology that can significantly impact our daily lives, but only with appropriate bans and regulations in place to ensure AI is used and developed ethically and responsibly.

"AI has already shown its potential and has the possibility to revolutionize many areas such as healthcare, finance, transportation and more. It can automate tedious tasks, increase efficiency and provide information that was previously not possible. AI could also help us solve complex problems, make better decisions, reduce human error or tackle dangerous tasks such as defusing a bomb, flying into space or exploring the oceans. But at the same time, we see massive use of AI technologies to develop cyber threats as well," says Ram Narayanan, Country Manager at Check Point Software Technologies, Middle East. Such misuse of AI has been widely reported in the media, with select reports around ChatGPT being leveraged by cybercriminals to contribute to the creation of malware.

Overall, the development of AI is not just another passing craze, but it remains to be seen how much of a positive or negative impact it will have on society. And although AI has been around for a long time, 2023 will be remembered by the public as the "Year of AI". However, there continues to be a lot of hype around this technology and some companies may be overreacting. We need to have realistic expectations and not see AI as an automatic panacea for all the world's problems.

We often hear concerns about whether AI will approach or even surpass human capabilities. Predicting how advanced AI will become is difficult, but there are already several categories. Current AI is referred to as narrow or "weak" AI (ANI, Artificial Narrow Intelligence). General AI (AGI, Artificial General Intelligence) would function like the human brain: thinking, learning and solving tasks like a human. The last category, Artificial Super Intelligence (ASI), consists essentially of machines that are smarter than us.

If artificial intelligence reaches the level of AGI, there is a risk that it could act on its own and potentially become a threat to humanity. Therefore, we need to work on aligning the goals and values of AI with those of humans.

Ram Narayanan further states, "To mitigate the risks associated with advanced AI, it is important that governments, companies and regulators work together to develop robust safety mechanisms, establish ethical principles and promote transparency and accountability in AI development. Currently, there is a minimum of rules and regulations. There are proposals such as the AI Act, but none of these have been passed and essentially everything so far is governed by the ethical compasses of users and developers. Depending on the type of AI, companies that develop and release AI systems should ensure at least minimum standards such as privacy, fairness, explainability or accessibility."

Unfortunately, AI can also be used by cybercriminals to refine their attacks, automatically identify vulnerabilities, create targeted phishing campaigns, conduct social engineering, or create advanced malware that can change its code to better evade detection. AI can also be used to generate convincing audio and video deepfakes that can be used for political manipulation, false evidence in criminal trials, or to trick users into paying money.

But AI is also an important aid in defending against cyberattacks in particular. For example, Check Point uses more than 70 different tools to analyse threats and protect against attacks, more than 40 of which are AI-based. These technologies help with behavioral analysis, analyzing large amounts of threat data from a variety of sources, including the darknet, making it easier to detect zero-day vulnerabilities or automate patching of security vulnerabilities.

"Various bans and restrictions on AI have also been discussed recently. In the case of ChatGPT, the concerns are mainly related to privacy, as we have already seen data leaks, nor is the age limit of users addressed. However, blocking similar services has only limited effect, as any slightly more savvy user can get around the ban by using a VPN, for example, and there is also a brisk trade in stolen premium accounts. The problem is that most users do not realise that the sensitive information entered into ChatGPT will be very valuable if leaked, and could be used for targeted marketing purposes. We are talking about potential social manipulation on a scale never seen before," points out Ram Narayanan.

The impact of AI on our society will depend on how we choose to develop and use this technology. It will be important to weigh the potential benefits and risks whilst striving to ensure that AI is developed in a responsible, ethical and beneficial way for society.

Read more:

AI can be transformative technology, only with appropriate restrictions and safeguards against malicious use - ZAWYA