Archive for the ‘Ai’ Category

NGA, SLU Host AI-Focused Geo-Resolution Conference – Saint Louis University

ST. LOUIS, MO – The National Geospatial-Intelligence Agency and Saint Louis University will co-host the Geo-Resolution 2023 conference, "Digital Transformations: Navigating a World of Data from Seabed to Space," Thursday, Sept. 28, at SLU's Busch Student Center. This year's theme will focus on the impact of artificial intelligence and new digital technologies on geospatial research and analysis.

Geo-Resolution is an annual conference that encourages collaboration between government, academic and industry partners to foster geospatial technology innovation and applications, connect geospatial experts and students, and grow the geospatial ecosystem in the greater St. Louis region.

Geo-Resolution 2023 discussions will include:

Geo-Resolution is designed to provide participants – particularly students – access to geospatial experts from government, academia, innovation hubs, start-up companies and nonprofit organizations. Students will be able to meet local leaders from industry, academia and government to explore geospatial career opportunities.

This year's conference will feature:

The conference will also include a Young Mentors panel, a student poster session, a student geospatial career fair and networking opportunities.

Geo-Resolution 2023 is free and open to the public. The conference will be held in-person at Saint Louis University and streamed live on the conference website.

Advance registration is required.

Register to Attend

NGA delivers world-class geospatial intelligence that provides a decisive advantage to policymakers, warfighters, intelligence professionals and first responders.

NGA is a unique combination of intelligence agency and combat support agency. It is the world leader in timely, relevant, accurate and actionable geospatial intelligence. NGA enables the U.S. intelligence community and the Department of Defense to fulfill the president's national security priorities to protect the nation.

For more information about NGA, visit us online at http://www.nga.mil, on Instagram, LinkedIn, Facebook and Twitter.

Founded in 1818, Saint Louis University is one of the nation's oldest and most prestigious Catholic institutions. Rooted in Jesuit values and its pioneering history as the first university west of the Mississippi River, SLU offers more than 13,500 students a rigorous, transformative education of the whole person. At the core of the University's diverse community of scholars is SLU's service-focused mission, which challenges and prepares students to make the world a better, more just place.

Originally posted here:

NGA, SLU Host AI-Focused Geo-Resolution Conference - Saint Louis University

VeChain and SingularityNET team up on AI to fight climate change – Cointelegraph

Artificial intelligence firm SingularityNET and blockchain firm VeChain have become the latest firms to marry blockchain with artificial intelligence – this time, with the aim of cutting down carbon emissions.

Over the last year, the crypto industry has seen an increasing amount of collaboration between blockchain and AI technology.

On Aug. 24, VeChain – a smart contract-compatible blockchain used for supply-chain tracking – announced a strategic collaboration with the decentralized AI services-sharing platform SingularityNET.

In a joint statement, the firms said the partnership will merge VeChain's enterprise data with SingularityNET's advanced AI algorithms to enhance automation of manual processes and provide real-time data.

SingularityNET founder and CEO Ben Goertzel told Cointelegraph that blockchain and AI go hand-in-hand and can solve problems where traditional approaches often fail.

"The last few years have taught the world that when the right AI algorithms meet the right data on sufficient processing power, magic can happen," said Goertzel.

Goertzel explained the partnership could, for example, allow AI to identify new ways to use VeChains blockchain data to optimize carbon emission output and minimize pollution.

"Achieving a sustainable and environmentally positive economy is an extremely complex problem involving coordination of a large number of different economic players," he added.

Meanwhile, VeChain Chief Technology Officer Antonio Senatore added: "Blockchain and AI offer game-changing capabilities for industries and enterprises and are opening new avenues of operation."

Related: Here's how blockchain and AI combine to redefine data security

In July, Bitcoin miner Hive Blockchain changed its name and business strategy as part of its foray into the emerging field of AI. Hive Digital Technologies CEO Aydin Kilic told Cointelegraph in August that blockchain and AI are both pillars of Web3.

In June, Ethereum layer-2 scaling network Polygon announced its integration of AI technology. The AI interface, called Polygon Copilot, will help developers obtain analytics and insights for DApps on the network.

Dr. Daoyuan Wu, an AI researcher from Nanyang Technological University in Singapore and a MetaTrust affiliate, told Cointelegraph that the inherent autonomy of AI aligns seamlessly with the decentralized and autonomous characteristics of blockchain and smart contracts.

MetaTrust Labs is working on a project called GPTScan, a tool that combines Generative Pre-trained Transformer (GPT) models with static analysis to detect logic vulnerabilities in smart contracts.

"GPTScan is the first tool of its kind that utilizes GPT to match candidate vulnerable functions based on code-level scenarios and properties," Dr. Wu added in an interview with Cointelegraph.
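As a rough illustration of the pattern Dr. Wu describes – statically pick out candidate functions from a contract's source, then ask a GPT model whether each one matches a vulnerability scenario – here is a minimal Python sketch. It is not GPTScan's actual implementation: the regex-based extraction, the scenario wording, the model name and the Token.sol input file are all hypothetical placeholders, and it assumes the official openai Python package with an API key in the environment.

```python
import re
from openai import OpenAI  # assumes the official openai package (>= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical example of a "code-level scenario" to match functions against.
SCENARIO = (
    "Does this Solidity function transfer tokens or ether based on an "
    "amount or price that a caller can influence without validation?"
)

def candidate_functions(source: str) -> list[str]:
    """Crude static pass: pull every function body out of the source."""
    out = []
    for match in re.finditer(r"function\s+\w+\s*\([^)]*\)[^{]*\{", source):
        depth, i = 0, match.start()
        while i < len(source):  # walk forward to the matching closing brace
            if source[i] == "{":
                depth += 1
            elif source[i] == "}":
                depth -= 1
                if depth == 0:
                    out.append(source[match.start():i + 1])
                    break
            i += 1
    return out

def matches_scenario(function_src: str) -> bool:
    """Ask the model for a strict yes/no match against the scenario."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system", "content": "Answer only 'yes' or 'no'."},
            {"role": "user", "content": SCENARIO + "\n\n" + function_src},
        ],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

# Token.sol is a stand-in for whatever contract you want to scan.
for fn in candidate_functions(open("Token.sol").read()):
    if matches_scenario(fn):
        print("Candidate logic vulnerability:\n", fn[:200], "\n")
```

A production tool would also confirm the model's matches with further static analysis rather than trusting the GPT step alone – the combination the article describes.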


View original post here:

VeChain and SingularityNET team up on AI to fight climate change - Cointelegraph

Using Generative AI to Resurrect the Dead Will Create a Burden for … – WIRED

Given enough data, one can feel like it's possible to keep dead loved ones alive. With ChatGPT and other powerful large language models, it is feasible to create a more convincing chatbot of a dead person. But doing so, especially in the face of scarce resources and inevitable decay, ignores the massive amounts of labor that go into keeping the dead alive online.
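The technical barrier is low, which is part of what makes the feasibility claim plausible: a bare-bones "griefbot" can be little more than a hosted language model prompted with a dead person's archived writing. The Python sketch below is a minimal hypothetical illustration – the archived_messages.txt file, the model name and the prompt wording are all placeholders, and it assumes the official openai package with an API key in the environment. Note that it depends entirely on a live third-party API, a maintained key, and a readable message archive: exactly the kind of ongoing upkeep this piece describes.

```python
from openai import OpenAI  # assumes the official openai package (>= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical archive: emails, texts or posts gathered from a digital estate.
samples = open("archived_messages.txt", encoding="utf-8").read()[:4000]

persona = (
    "You are imitating a specific person based only on the writing "
    "samples below. Match their tone, vocabulary and quirks.\n\n" + samples
)

history = [{"role": "system", "content": persona}]
while True:  # simple chat loop; Ctrl-C to stop
    history.append({"role": "user", "content": input("> ")})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print(text)
```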

Someone always has to do the hard work of maintaining automated systems, as demonstrated by the overworked and underpaid annotators and content moderators behind generative AI, and this is also true where replicas of the dead are concerned. From managing a digital estate after gathering passwords and account information, to navigating a slowly-decaying inherited smart home, digital death care practices require significant upkeep. Content creators depend on the backend labor of caregivers and a network of human and nonhuman entities, from specific operating systems and devices to server farms, to keep digital heirlooms alive across generations. Updating formats and keeping those electronic records searchable, usable, and accessible requires labor, energy, and time. This is a problem for archivists and institutions, but also for individuals who might want to preserve the digital belongings of their dead kin.

And even with all of this effort, devices, formats, and websites also die, just as we frail humans do. Despite the fantasy of an automated home that can run itself in perpetuity or a website that can survive for centuries, planned obsolescence means these systems will most certainly decay. As people tasked with maintaining the digital belongings of dead loved ones can attest, there is a stark difference between what people think they want, or what they expect others to do, and the reality of what it means to help technologies persist over time. The mortality of both people and technology means that these systems will ultimately stop working.

Early attempts to create AI-backed replicas of dead humans certainly bear this out. Intellitar's Virtual Eternity, based in Scottsdale, Arizona, launched in 2008 and used images and speech patterns to simulate a human's personality, perhaps filling in for someone at a business meeting or chatting with grieving loved ones after a person's death. Writing for CNET, a reviewer dubbed Intellitar "the product most likely to make children cry." But soon after the company went under in 2012, its website disappeared. LifeNaut, a project backed by the transhumanist organization Terasem – which is also known for creating BINA48, a robotic version of Bina Aspen, the wife of Terasem's founder – will purportedly combine genetic and biometric information with personal datastreams to simulate a full-fledged human being once technology makes it possible to do so. But the project's site itself relies on outmoded Flash software, indicating that the true promise of digital immortality is likely far off and will require updates along the way.

With generative AI, there is speculation that we might be able to create even more convincing facsimiles of humans, including dead ones. But this requires vast resources, including raw materials, water, and energy, pointing to the folly of maintaining chatbots of the dead in the face of catastrophic climate change. It also has astronomical financial costs: ChatGPT purportedly costs $700,000 a day to maintain and, by some accounts, could bankrupt OpenAI by 2024. This is not a sustainable model for immortality.

There is also the question of who should have the authority to create these replicas in the first place: a close family member, an employer, a company? Not everyone would want to be reincarnated as a chatbot. In a 2021 piece for the San Francisco Chronicle, the journalist Jason Fagone recounts the story of a man named Joshua Barbeau who produced a chatbot version of his long-dead fiancée Jessica using OpenAI's GPT-3. It was a way for him to cope with death and grief, but it also kept him invested in a close romantic relationship with a person who was no longer alive. This was also not the way that Jessica's other loved ones wanted to remember her; family members opted not to interact with the chatbot.

Go here to read the rest:

Using Generative AI to Resurrect the Dead Will Create a Burden for ... - WIRED

Warner Calls on Biden Administration to Remain Engaged in AI … – Senator Mark Warner

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.

As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.

"These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks," Sen. Warner wrote. "As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models."

The letter builds on Sen. Warner's continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.

The letter also affirms Congress' role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below.

Dear President Biden,

I write to applaud the Administration's significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments – largely applicable to these vendors' most advanced products – can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.

These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly capable open source models have been released to the public and would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.

To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways.

First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.

Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.

Lastly, the Administration's successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.

This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly capable and well-established set of resources, processes, and organizations – including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence's Foreign Malign Influence Center – exists to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.

Thank you for your Administrations important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.

###

Read more here:

Warner Calls on Biden Administration to Remain Engaged in AI ... - Senator Mark Warner

Advisory report begins integration of generative AI at U-M | The … – The University Record

A committee looking into how generative artificial intelligence affects University of Michigan students, faculty, researchers and staff has issued a report that attempts to lay a foundation for how U-M will live and work with this new technology.

Recommendations include:

The report is available to the public at a website created by the committee and Information and Technology Services to guide how faculty, staff and students can responsibly and effectively use GenAI in their daily lives.

U-M also has announced it will release its own suite of university-hosted GenAI services that are focused on providing safe and equitable access to AI tools for all members of the U-M community. They are expected to be released before students return to campus this fall.

"GenAI is shifting paradigms in higher education, business, the arts and every aspect of our society. This report represents an important first step in U-M's intention to serve as a global leader in fostering the responsible, ethical and equitable use of GenAI in our community and beyond," said Laurie McCauley, provost and executive vice president for academic affairs.

The report offers recommendations on everything from how instructors can effectively use GenAI in their classrooms to how students can protect themselves when using popular GenAI tools, such as ChatGPT, without exposing themselves to risks of sharing sensitive data.

"More than anything, the intention of the report is to be a discussion starter," said Ravi Pendse, vice president for information technology and chief information officer. "We have heard overwhelmingly from the university community that they needed some direction on how to work with GenAI, particularly before the fall semester started. We think this report and the accompanying website are a great start to some much-needed conversations."

McCauley and Pendse sponsored the creation of the Generative Artificial Intelligence Advisory Committee in May. Since then, the 18-member committee – composed of faculty, staff and students from across all segments of U-M – has worked together to provide vital insights into how GenAI technology could affect their communities.

"Our goals were to present strategic directions and guidance on how GenAI can enhance the educational experience, enrich research capabilities, and bolster U-M's leadership in this era of digital transformation," said committee chair Karthik Duraisamy, professor of aerospace engineering and of mechanical engineering, and director of the Michigan Institute for Computational Discovery and Engineering.

"Committee members put in an enormous amount of work to identify the potential benefits of GenAI to the diverse missions of our university, while also shedding light on the opportunities and challenges of this rapidly evolving technology."

"This is an exciting time," McCauley added. "I am impressed by the work of this group of colleagues. Their report asks important questions and provides thoughtful guidance in a rapidly evolving area."

Pendse stressed the GenAI website will be constantly updated and will serve as a hub for the various discussions related to the topic across U-M.

"We know that almost every group at U-M is having their own conversations about GenAI right now," Pendse said. "With the release of this report and the website, we hope to create a knowledge hub where students, faculty and staff have one central location where they can come looking for advice. I am proud that U-M is serving both as a local and global leader when it comes to the use of GenAI."

Read the original here:

Advisory report begins integration of generative AI at U-M | The ... - The University Record