Archive for the ‘Artificial General Intelligence’ Category

AI Singapore and the Digital and Intelligence Service Sign … – MINDEF Singapore

Senior Minister of State for Defence Mr Heng Chee How officiated the inaugural AI Student Developer Conference at the Lifelong Learning Institute today. Organised by AI Singapore (AISG) and attended by more than 300 participants, the conference allowed attendees to gain insights into Artificial Intelligence (AI) and the AI industry through panel discussions, interactive booths and workshops, as well as explore career opportunities with industry partners. As part of the conference, a Memorandum of Understanding (MOU) between AISG and the Singapore Armed Forces (SAF)'s Digital and Intelligence Service (DIS) was signed.

Delivering the opening address at the conference, Mr Heng said, "Outside of the defence-specific sector partners, DIS is also enlarging its engagement with the wider technology ecosystem, including engagement with the commercial sector and academia. This MOU is another example of DIS's pursuit in this direction of engagement, augmenting our ongoing efforts to build and sustain a strong and capable workforce and talent pipeline to strengthen and sharpen the SAF's digital cutting edge."

The MOU between AISG and the DIS was signed by Head of LearnAI at AISG, Mr Koo Sengmeng, and DIS Chief Digitalisation Officer, Military Expert 7 (ME7) Guo Jinghua. Senior Director of AI Governance at AISG, Prof Simon Chesterman, and Chief of Digital and Intelligence Service/Director Military Intelligence, Brigadier-General Lee Yi-Jin, witnessed the signing of the MOU, which formalises the collaboration in deepening national AI expertise for Singapore's digital defence.

The MOU will further collaboration and strengthen the DIS's capability development in Data Science and AI (DSAI). The DIS will need to keep pace with, and agilely harness, rapid AI innovation in academia and industry to complement the strong AI capabilities of the Defence Technology Community. This is crucial for the DIS to better exploit the vast and growing volume of data in the digital domain, and to effectively detect and respond to the increasing digital threats facing Singapore and Singaporeans. The DIS will leverage AISG's industry and talent development programmes, including the 100 Experiments (100E) and AI Apprenticeship Programme (AIAP), to expand the DIS's capacity to deploy advanced AI techniques, such as Large Language Models and Reinforcement Learning, and integrate them into the operations of the DIS and the SAF.

The DIS will also work with AISG to develop and expand its workforce. Through the introduction of AISG's LearnAI courses, the DIS will expand its course offerings for DIS personnel's professional upskilling. The DIS will also leverage AISG's existing networks of students to sustain the DSAI talent pipeline, while supporting AISG's mandate of growing and developing a national digital workforce. The DIS will enable national talents in AISG's AIAP, who are undergoing AI deep-skilling, to contribute to national defence through their involvement in the various projects supporting the DIS, and will offer employment opportunities to these talents where suitable. In addition, AISG will share information about National Service (NS) and career opportunities in the DIS, such as the Digital Work-Learn Scheme[1], with students from the AISG Student Outreach Programme.

Highlighting the importance of the MOU for Singapore's digital defence, Mr Koo said, "Our partnership with the DIS will ensure that Singapore has a robust and resilient pipeline of AI talents that have knowledge of issues related to national defence and possess the relevant expertise to protect our digital borders and safeguard Singapore. We look forward to working closely with the DIS to collectively deepen the core competencies of our next-generation Singapore Armed Forces to stay ahead of the threats of tomorrow."

ME7 Guo said, "The DIS and AISG are working towards our common goal of strengthening digital capabilities to safeguard Singapore. The effective use of AI is crucial for the SAF's mission success. We need to better reap the dynamic AI innovations in academia and industry, and integrate them into SAF operations. Our partnership with AISG is therefore an important part of our approach to leverage cutting-edge AI innovations. Beyond AI capability development, our partnership with AISG will help grow the DIS digital fighting force to defend Singapore in the digital domain, and contribute to the national AI talent pipeline through various schemes such as the Digital Work-Learn Scheme."

[1] Servicemen under the WLS will serve for four years as Digital Specialists in the SAF, in a combination of full-time National Service and Regular service, developing data science, software development and AI skills through vocational, on-the-job and academic training.

About AI Singapore

AI Singapore (AISG) is a national AI programme launched by the National Research Foundation (NRF), Singapore to anchor deep national capabilities in artificial intelligence (AI) to create social and economic impacts through AI, grow the local talent, build an AI ecosystem, and put Singapore on the world map.

AISG brings together Singapore-based research institutions and the vibrant ecosystem of AI start-ups and companies developing AI products to perform applications-inspired research, grow the knowledge, create the tools, and develop the talent to power Singapore's AI efforts.

AISG is driven by a government-wide partnership comprising NRF, the Smart Nation and Digital Government Office (SNDGO), Economic Development Board (EDB), Infocomm Media Development Authority (IMDA), SGInnovate, and the Integrated Health Information Systems (IHiS).

Details of some of its programmes can be found below:

- 100 Experiments (100E)

- AI Apprenticeship Programme (AIAP)

- LearnAI

For more information on AISG and its programmes, please visit: http://www.aisingapore.org

AI Singapore's Social Media Channels:

Facebook: https://www.facebook.com/groups/aisingapore

Instagram: @ai_singapore

LinkedIn: https://www.linkedin.com/company/aisingapore/

Twitter: https://twitter.com/AISingapore

About The DIS

As part of the transformation of the Next Generation SAF, the Digital and Intelligence Service, the fourth Service of the Singapore Armed Forces (SAF), was established in 2022. The DIS consolidates and integrates the SAF's existing Command, Control, Communications, Computers and Intelligence (C4I) and cyber capabilities. As a dedicated Service, the DIS will raise, train and sustain digital forces and capabilities to fulfil its mission of defending the peace and security of Singapore from the evolving and increasingly complex threats in the digital domain.

The mission of the DIS is to defend and dominate in the digital domain. As part of an integrated SAF, the DIS will enhance Singapore's security, from peace to war. The DIS plays a critical role in defending Singapore from threats in the digital domain, and allows the SAF to operate better as a networked and integrated force against a wider spectrum of external threats, enhancing and safeguarding Singapore's peace and sovereignty. The DIS collaborates with partners across MINDEF, the SAF and Whole-of-Government agencies, as well as like-minded partners in academia and industry, in defending our nation against threats in the digital domain.

Building a highly-skilled digital workforce is key to the digital defence strategy of the SAF. The DIS continually attracts and develops both military and non-uniformed digital experts to grow the SAF's digital workforce.

The DIS leverages National Servicemen to develop its digital workforce. Operationally Ready National Servicemen (ORNS) with matching talents and relevant civilian expertise may express interest to serve in the DIS through the Enhanced Expert Deployment Scheme (EEDS). Full-time National Servicemen (NSFs) with suitable skills are offered the opportunity to participate in DIS-related Work-Learn Schemes (WLS), under which they undergo military training and serve NS while earning academic credits toward a relevant university degree. There are currently two DIS WLS: the Digital WLS and the Cyber WLS.

For more information on the DIS and its careers, please visit: http://www.mindef.gov.sg/dis

The Digital and Intelligence Service's Social Media Channels:

Facebook: https://www.facebook.com/thesingaporeDIS

Instagram: @thesingaporedis

LinkedIn: https://www.linkedin.com/company/digital-and-intelligence-service

Twitter: @thesingaporeDIS

See the rest here:

AI Singapore and the Digital and Intelligence Service Sign ... - MINDEF Singapore

AI robots figure out how to play football in shambolic footage – The Independent


Robots fitted with AI developed by Google's DeepMind have figured out how to play football.

The miniature humanoid robots, which are about knee height, were able to make tackles, score goals and easily recover from falls when tripped.

In order to learn how to play, AI researchers first used DeepMind's state-of-the-art MuJoCo physics engine to train virtual versions of the robots in decades of match simulations.

The simulated robots were rewarded if their movements led to improved performance, such as winning the ball from an opponent or scoring a goal.

Once they were sufficiently capable of performing the basic skills, DeepMind researchers then transferred the AI into real-life versions of the bipedal bots, which were able to play one-on-one games of football against each other with no additional training required.
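
The paper itself details DeepMind's actual reward design; as a rough illustration of the general technique, a shaped reinforcement-learning reward for a simulated soccer agent typically mixes a sparse bonus for scoring with dense terms that encourage useful intermediate behaviour. The sketch below is hypothetical: the state fields, weights, and thresholds are invented for illustration and are not taken from DeepMind's work.

```python
import numpy as np

def soccer_reward(state, prev_state):
    """Hypothetical shaped reward for a simulated soccer agent.

    Mixes a sparse bonus for scoring with dense shaping terms for
    moving the ball toward the opponent's goal and staying upright.
    All field names and weights are illustrative, not DeepMind's.
    """
    reward = 0.0

    # Sparse term: large bonus when a goal is scored on this step.
    if state["goal_scored"]:
        reward += 1000.0

    # Dense term: reward progress of the ball toward the opponent's goal.
    prev_dist = np.linalg.norm(prev_state["ball_pos"] - prev_state["goal_pos"])
    curr_dist = np.linalg.norm(state["ball_pos"] - state["goal_pos"])
    reward += 10.0 * (prev_dist - curr_dist)

    # Dense term: penalise a fallen torso, encouraging rapid fall recovery.
    if state["torso_height"] < 0.2:  # metres; the robots are knee height
        reward -= 1.0

    return reward
```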

"The trained soccer players exhibit robust and dynamic movement skills, such as rapid fall recovery, walking, turning, kicking and more," DeepMind noted in a blog post.

The agents also developed a basic strategic understanding of the game, and learned, for instance, to anticipate ball movements and to block opponent shots.

Although the robots are inherently fragile, minor hardware modifications, together with basic regularisation of the behaviour during training, led the robots to learn safe and effective movements while still performing in a dynamic and agile way.

A paper detailing the research, titled "Learning agile soccer skills for a bipedal robot with deep reinforcement learning," is currently under peer review.

Previous DeepMind research on football-playing AI has used different team setups, increasing the number of players in order to teach simulated humanoids how to work as a team.

The researchers say the work will not only advance coordination between AI systems, but also offer new pathways towards building artificial general intelligence (AGI) that is of an equivalent or superior level to humans.

Continued here:

AI robots figure out how to play football in shambolic footage - The Independent

MIT Professor Compares Ignoring AGI to Don’t Look Up – Futurism

MIT professor and AI researcher Max Tegmark is pretty stressed out about the potential impact of artificial general intelligence (AGI) on human society. In a new essay for Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an AI that can outsmart us.

"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned superintelligence," Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing AGI threat to director Adam McKay's popular climate change satire.

For those who haven't seen it, "Don't Look Up" is a fictional story about a team of astronomers who, after discovering that a species-destroying asteroid is hurtling towards Earth, set out to warn the rest of human society. But to their surprise and frustration, a massive chunk of humanity doesn't care.

The asteroid is one big metaphor for climate change. But Tegmark thinks that the story can apply to the risk of AGI as well.

"A recent survey showed that half of AI researchers give AI at least ten percent chance of causing human extinction," the researcher continued. "Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence."

"Think again," he added, "instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."

In short, according to Tegmark, AGI is a very real threat, and human society isn't doing nearly enough to stop it or, at the very least, isn't ensuring that AGI will be properly aligned with human values and safety.

And just like in McKay's film, humanity has two choices: begin to make serious moves to counter the threat or, if things go the way of the film, watch our species perish.

Tegmark's claim is pretty provocative, especially considering that a lot of experts out there either don't agree that AGI will ever actually materialize, or argue that it'll take a very long time to get there, if ever. Tegmark does address this disconnect in his essay, although his argument arguably isn't the most convincing.

"I'm often told that AGI and superintelligence won't happen because its impossible: human-level Intelligence is something mysterious that can only exist in brains," Tegmark writes. "Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesnt matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers."

Tegmark goes as far as to claim that superintelligence "isn't a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning." To support his theory, the researcher pointed to a recent Microsoft study arguing that OpenAI's large language model GPT-4 is already showing "sparks" of AGI and a recent talk given by deep learning researcher Yoshua Bengio.

While the Microsoft study isn't peer-reviewed and arguably reads more like marketing material, Bengio's warning is much more compelling. His call to action is much more grounded in what we don't know about the machine learning programs that already exist, as opposed to making big claims about tech that does not yet exist.

To that end, the current crop of less sophisticated AIs already poses a threat, from misinformation-spreading synthetic content to the threat of AI-powered weaponry.

And the industry at large, as Tegmark further notes, hasn't exactly done an amazing job so far of ensuring slow and safe development; he argues that we shouldn't have taught AI how to code, connected it to the internet, or given it a public API.

Ultimately, if and when AGI might come to fruition is still unclear.

While there's certainly a financial incentive for the field to keep moving quickly, a lot of experts agree that we should slow down the development of more advanced AIs, regardless of whether AGI is around the corner or still light-years away.

And in the meantime, Tegmark argues that we should agree there's a very real threat in front of us before it's too late.

"Although humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off and instead enjoying the amazing benefits that safe, aligned AI has to offer," Tegmark writes. "This requires agreeing that the cliff actually exists and falling off of it benefits nobody."

"Just look up!" he added.


See the rest here:

MIT Professor Compares Ignoring AGI to Don't Look Up - Futurism

Meet the Greta Thunberg of AI – POLITICO

With help from Derek Robertson and Sam Sutton

Sneha Revanur speaking in 2022. | Getty Images for Unfinished Live

Parents just don't understand the risks of generative artificial intelligence. At least according to a group of Zoomers grappling with this new force that their elders are struggling to regulate.

While young people often bear the brunt of new technologies, and must live with their long-term consequences, no youth movement has emerged around tech regulation that matches the scope or power of youth climate and gun control activism.

That's starting to change, though, especially as concerns about AI mount.

Earlier today, a consortium of 10 youth organizations sent a letter to congressional leaders and the White House Office of Science and Technology Policy calling on them to include more young people on AI oversight and advisory boards.

The letter, provided first to DFD, was spearheaded by Sneha Revanur, a first-year student at Williams College in Massachusetts and the founder of Encode Justice, an AI-focused civil society group. As a charismatic teenager who is not shy about condemning a generation of policymakers who are "out of touch," as she put it in an interview, she's the closest thing the emerging movement to rein in AI has to its own Greta Thunberg. Thunberg began her rise as a global icon of the climate movement in 2018, at the age of 15, with weekly solo protests outside of Sweden's parliament.

A native of San Jose in the heart of Silicon Valley, Revanur also got her start in tech advocacy as a 15-year-old. In 2020, she volunteered for the successful campaign to defeat California's Proposition 25, which would have enshrined the replacement of cash bail with a risk-based algorithmic system.

Encode Justice emerged from that ballot campaign with a focus on the use of AI algorithms in surveillance and the criminal justice system. It currently boasts a membership of 600 high school and college students across 30 countries. Revanur said the group's primary source of funding currently comes from the Omidyar Network, a self-described social change venture led by left-leaning eBay founder Pierre Omidyar.

Revanur has become increasingly preoccupied with generative AI as it sends ripples through societies across the world. The aha moment came when she read that February New York Times article about a seductive, conniving AI chatbot. In recent weeks, concerns have only grown about the potential for generative AI to deceive and manipulate people, as well as the broader risks posed by the potential development of artificial general intelligence.

"We were somewhat skeptical about the risks of generative AI," Revanur says. "We see this open letter as a marking point that we're pivoting."

The letter is borne in part out of concerns that older policymakers are ill-prepared to handle this rapidly developing technology. Revanur said that when she meets with congressional offices, she is struck by the lack of tech-specific expertise. "We're almost always speaking to a judiciary staffer or a commerce staffer." State legislatures, she said, tend to be worse.

One sign of the generational tension at play: Today's letter calls on policymakers to improve technical literacy in government.

The letter comes at a time when the fragmented youth tech movement is starting to coalesce, according to Zamaan Qureshi, co-chair of Design It For Us Coalition, a signatory of the AI letter.

"The groups that are out there have been working in a disjointed way," Qureshi, a junior at American University in Washington, said. The coalition grew out of a successful campaign last year in support of the California Age Appropriate Design Code, a state law governing online privacy for children.

To improve coordination on tech safety issues, Qureshi and a group of fellow activists launched the Design It For Us Coalition at the end of March with a kickoff call featuring advisory board member Frances Haugen, the Facebook whistleblower. The coalition is currently focused on social media, which is often blamed for a teen mental health crisis, Qureshi said.

But it's the urgency of AI that prompted today's letter.

So, is this the issue that will catapult youth tech activists to the same visibility and influence of other youth movements?

Qureshi said he and his fellow organizers have been in touch with youth climate activists and with organizers from March for Our Lives, the student-led gun control organization.

And the tech activists are looking to push their weight around in 2024.

Revanur, who praised President Joe Biden for prioritizing tech regulation, said Encode Justice plans to make an endorsement in the upcoming presidential race, and is watching to see what his administration does on AI. The group is also considering congressional and state legislative endorsements.

But endorsements and a politely worded letter are a far cry from the combative and controversial tactics that have put the youth climate movement in the spotlight, such as a 2019 confrontation with Democratic Sen. Dianne Feinstein inside her Bay Area office.

Tech activists remain open to the adversarial approach. Revanur said the risks of AI run amuck could justify more confrontational measures going forward.

"We definitely do see ourselves expanding direct action," she said, "because we have youth on the ground."

BEVERLY HILLS – "Digital money is here to stay," International Monetary Fund Managing Director Kristalina Georgieva said at the Milken Institute's annual Global Conference today. But if people expect central bank digital currencies to upend the banking sector, they shouldn't hold their breath.

Georgieva splashed cold water on a retail CBDC – which refers to tokens issued directly to the public – while offering a tacit endorsement of wholesale digital currencies that could be used by banks.

"We think that wholesale CBDCs can be put in place with fairly little space for undesirable surprises," she said. "Retail CBDCs, on the other hand, could completely transform the financial system in a way that we don't quite know what consequences it could bring." – Sam Sutton

AI's medical takeover continues apace: Today's Future Pulse newsletter reveals the results of a new study showing that ChatGPT might give real-life doctors a run for their money when it comes to bedside manner.

The study, published in JAMA Internal Medicine, took 195 question-and-answer pairings from the popular subreddit r/AskDocs, ran the same questions by ChatGPT, and then had a panel of five experts evaluate whether the real-life doctors or the AI platform gave a better response. It was no contest: The experts found that 78 percent of the time ChatGPT prevailed.

And not only that: its responses were also rated significantly more empathetic than physician responses, by a factor of almost ten. The researchers suggest using the findings not to replace, but to augment, doctor-patient interactions, writing that the approach could be used in scenarios "such as using [a] chatbot to draft responses that physicians could then edit," and that "randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes."

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); and Benton Ives ([emailprotected]). Follow us @DigitalFuture on Twitter.

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

The rest is here:

Meet the Greta Thunberg of AI - POLITICO

The future of generative AI is niche, not generalized – MIT Technology Review

ChatGPT has sparked speculation about artificial general intelligence. But the next real phase of AI will be in specific domains and contexts.

The relentless hype surrounding generative AI in the past few months has been accompanied by equally loud anguish over the supposed perils – just look at the open letter calling for a pause in AI experiments. This tumult risks blinding us to more immediate risks – think sustainability and bias – and clouds our ability to appreciate the real value of these systems: not as generalist chatbots, but instead as a class of tools that can be applied to niche domains and offer novel ways of finding and exploring highly specific information.

This shouldn't come as a surprise. The news that a dozen companies have developed ChatGPT plugins is a clear demonstration of the likely direction of travel. A generalized chatbot won't do everything for you, but if you're, say, Expedia, being able to offer customers a simple way to organize their travel plans is undeniably going to give you an edge in a marketplace where information discovery is so important.
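
For concreteness, a ChatGPT plugin is described to the model by a small manifest file plus an OpenAPI spec for the service's endpoints. The sketch below writes such a manifest as a Python dict; the field names follow the manifest format OpenAI documented at the plugins launch, but the travel-service name, URLs, and email are invented placeholders, so treat this as an illustration rather than a working integration.

```python
# Hypothetical ai-plugin.json manifest for a travel-planning plugin,
# expressed as a Python dict. Field names follow OpenAI's documented
# plugin manifest format; all values are made-up placeholders.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Trip Planner",
    "name_for_model": "trip_planner",
    "description_for_human": "Search flights and hotels in one place.",
    "description_for_model": (
        "Use this plugin when the user wants to search, compare, "
        "or organize flights and hotels."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # OpenAPI spec the model reads to learn the available endpoints.
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```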

Whether or not this really amounts to an "iPhone moment" or a serious threat to Google search isn't obvious at present – while it will likely push a change in user behaviors and expectations, the first shift will be organizations pushing to bring tools trained on large language models (LLMs) to learn from their own data and services.


And this, ultimately, is the key: the significance and value of generative AI today is not really a question of societal or industry-wide transformation. It's instead a question of how this technology can open up new ways of interacting with large and unwieldy amounts of data and information.

OpenAI is clearly attuned to this fact and senses a commercial opportunity: although the list of organizations taking part in the ChatGPT plugin initiative is small, OpenAI has opened up a waiting list where companies can sign up to gain access to the plugins. In the months to come, we will no doubt see many new products and interfaces backed by OpenAIs generative AI systems.

While it's easy to fall into the trap of seeing OpenAI as the sole gatekeeper of this technology – and ChatGPT as the go-to generative AI tool – this fortunately is far from the case. You don't need to sign up on a waiting list or have vast amounts of cash available to hand over to Sam Altman; instead, it's possible to self-host LLMs.

This is something we're starting to see at Thoughtworks. In the latest volume of the Technology Radar – our opinionated guide to the techniques, platforms, languages and tools being used across the industry today – we've identified a number of interrelated tools and practices that indicate the future of generative AI is niche and specialized, contrary to what much mainstream conversation would have you believe.

Unfortunately, we don't think this is something many business and technology leaders have yet recognized. The industry's focus has been set on OpenAI, which means the emerging ecosystem of tools beyond it – exemplified by projects like GPT-J and GPT Neo – and the more DIY approach they can facilitate have so far been somewhat neglected. This is a shame, because these options offer many benefits. For example, a self-hosted LLM sidesteps the very real privacy issues that can come from connecting data with an OpenAI product. In other words, if you want to deploy an LLM to your own enterprise data, you can do precisely that yourself; it doesn't need to go elsewhere. Given both industry and public concerns with privacy and data management, being cautious rather than being seduced by the marketing efforts of big tech is eminently sensible.
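
As a minimal sketch of what self-hosting looks like in practice, the snippet below loads GPT-J (one of the open models mentioned above) through the Hugging Face transformers library, so prompts and outputs never leave your own hardware. The prompt is a placeholder, and the main practical caveats are model size and hardware requirements.

```python
# Minimal self-hosted text generation with an open LLM. Nothing here
# calls a third-party API; the weights run on your own machine.
from transformers import pipeline

# GPT-J is roughly 24 GB of float32 weights: expect a long first
# download and either a large GPU or plenty of RAM (CPU works, slowly).
generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

prompt = "Summarise our internal expenses policy in one sentence:"
output = generator(prompt, max_new_tokens=50, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```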

A related trend we've seen is domain-specific language models. Although these are also only just beginning to emerge, fine-tuning publicly available, general-purpose LLMs on your own data could form a foundation for developing incredibly useful information retrieval tools. These could be used, for example, on product information, content, or internal documentation. In the months to come, we think you'll see more examples of these being used to do things like helping customer support staff and enabling content creators to experiment more freely and productively.
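
Mechanically, this kind of domain adaptation is straightforward with today's open tooling. The sketch below fine-tunes a smaller open model (GPT-Neo) on a plain-text file of internal documentation using Hugging Face's Trainer; the file path, model choice, and hyperparameters are placeholders you would tune for a real dataset, not a recommended recipe.

```python
# Rough sketch: adapt a general-purpose open LLM to internal docs.
# "internal_docs.txt" is a hypothetical plain-text corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-neo-1.3B"  # small enough for a single GPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train_set,
    # Causal LM objective (mlm=False): predict the next token.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```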

If generative AI does become more domain-specific, the question of what this actually means for humans remains. However, I'd suggest that this view of the medium-term future of AI is a lot less threatening and frightening than many of today's doom-mongering visions. By better bridging the gap between generative AI and more specific and niche datasets, over time people should build a subtly different relationship with the technology. It will lose its mystique as something that ostensibly knows everything, and it will instead become embedded in our context.

Indeed, this isn't that novel. GitHub Copilot is a great example of AI being used by software developers in very specific contexts to solve problems. Despite its being billed as "your AI pair programmer," we would not call what it does pairing – it's much better described as a supercharged, context-sensitive Stack Overflow.

As an example, one of my colleagues uses Copilot not to do work but as a means of support as he explores a new programming language – it helps him to understand the syntax or structure of a language in a way that makes sense in the context of his existing knowledge and experience.

We will know that generative AI is succeeding when we stop noticing it and the pronouncements about what it might do die down. In fact, we should be willing to accept that its success might actually look quite prosaic. This shouldn't matter, of course; once we've realized it doesn't know everything – and never will – that will be when it starts to become really useful.

This content was produced by Thoughtworks. It was not written by MIT Technology Review's editorial staff.

Follow this link:

The future of generative AI is niche, not generalized - MIT Technology Review