Archive for the ‘Ai’ Category

Warner on AI regulation: ‘We probably can’t solve it all at once’ – POLITICO

"I'm very sensitive to the notion that on AI we shouldn't do that," he continued, "but if we try to overreach, we may come up with goose eggs, meaning nothing."

Congress sat on the regulatory sidelines throughout the rise of the internet and social media, only to later discover widespread concerns including data privacy, hate speech, election interference, misinformation and market dominance. Even after years of tense hearings and legislative proposals, Warner acknowledges, "our record in social media is a big fat zippo."

He worries lawmakers will suffer a similar fate with artificial intelligence by trying to mitigate its full spectrum of risks with a single law: comprehensive legislation that others, including Sen. Todd Young (R-Ind.), also doubt is realistic. Instead, Warner's been selling his colleagues on first tackling narrowly focused issues: the potential for AI-generated deepfakes to disrupt elections and financial markets.

"Where I'm at on the regulatory front is we probably can't solve it all at once," Warner said. "But where are the two most immediate areas where AI could have an almost existential threat tomorrow?"

He said he's considering new regulations that would perhaps address concerns about bias or require labels for AI-generated deepfakes, though Warner said he has reservations about allowing companies to apply labels.

He's also weighing an increase in penalties under existing laws when AI is used to undermine elections or markets. But who might pay those penalties when technology is abused, the tech company or its users, has been a sore point for Congress. A law created at the dawn of the internet, known as Section 230, has largely shielded tech companies from liability for their users' actions.

"Even the biggest advocates of Section 230, in my conversations with them up here on the Hill, have said they don't expect Section 230 to carry over to AI," Warner said.

A targeted bill would still struggle to clear a sharply divided Congress, especially one that deals with election security, Warner said. But he argues it stands a better chance than some of the more sweeping ideas being considered, including the notion of creating a federal agency to oversee AI. Warner said he's not against that idea, but with a Republican-controlled House, "I wouldn't put all my eggs in that basket."

Attempts to regulate technology with ties to China, in particular the video-sharing app TikTok, offer another cautionary tale, Warner said. The RESTRICT Act, S. 686 (118), legislation Warner introduced earlier this year with Senate Minority Whip John Thune (R-S.D.) that would give the Commerce Department more oversight of foreign-owned tech firms, was "lining up senators two by two, like Noah's Ark" and had the White House's blessing before stalling amid political attacks.

Warner said he has less anxiety today about China dominating AI than he did a year ago, though concerns remain about Beijing using the technology to advance its military and intelligence operations. But if Congress cannot come to a bipartisan agreement on how to combat national security concerns posed by Chinese technology, he said, then the prospect for comprehensive AI legislation looks grim.

"It's so important, this is more on the politics side than the substance side, to at least show we can do something now," Warner said. Even if industry and other groups think that's all Congress will do, he added, "I will take that risk because we've been so pathetic on social media. We've got to show that we can actually put some markers down that have the force of law."

Annie Rees contributed to this report.



New AI algorithm can detect signs of life with 90% accuracy. Scientists want to send it to Mars – Space.com

Can machines sniff out the presence of life on other planets? Well, to some extent, they already are.

Sensors onboard spacecraft exploring other worlds have the capability to detect molecules indicative of alien life. Yet, organic molecules that hint at intriguing biological processes are known to degrade over time, making their presence difficult for current technology to spot.

But now, a newly developed method based on artificial intelligence (AI) is capable of detecting subtle differences in molecular patterns that indicate biological signals, even in samples hundreds of millions of years old. Better yet, the method delivers results with 90% accuracy, according to new research.

In the future, this AI system could be embedded in smarter sensors on robotic space explorers, including landers and rovers on the moon and Mars, as well as within spacecraft circling potentially habitable worlds like Enceladus and Europa.

"We began with the idea that the chemistry of life differs fundamentally from that of the inanimate world; that there are 'chemical rules of life' that influence the diversity and distribution of biomolecules," Robert Hazen, a scientist at the Carnegie Institution for Science in Washington D.C. and co-author of the new study, said in a statement. "If we could deduce those rules, we can use them to guide our efforts to model life's origins or to detect subtle signs of life on other worlds."


The new method relies on the premise that the chemical processes governing the formation and functioning of biomolecules differ fundamentally from those behind abiotic molecules: biomolecules (like amino acids) retain information about the chemical processes that made them. This is likely to hold true for alien life, too, according to the new study.

On any world, life may produce and use higher quantities of a select few compounds to function on a daily basis. This would differentiate living systems from abiotic ones, and it is these differences that can be spotted and quantified with AI, the researchers said in the statement.

The team worked with 134 samples, of which 59 were biotic and 75 were abiotic. To train and validate the machine learning algorithm, the data was randomly split into a training set and a test set. The AI method successfully identified biotic samples from living things like shells, teeth, bones, rice and human hair, as well as from ancient life preserved in fossilized fragments of materials like coal, oil and amber.

The tool also identified abiotic samples, including chemicals like amino acids that were created in a lab, as well as carbon-rich meteorites, according to the new study.
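For readers curious what this kind of workflow looks like in practice, here is a minimal Python sketch of a biotic-versus-abiotic classifier trained and scored on a held-out split, mirroring the sample counts reported above. The synthetic features, the random forest model, and every numeric setting are illustrative assumptions; the study's actual inputs (measured molecular data) and its model are not reproduced here.

```python
# Minimal sketch: binary classification of biotic vs. abiotic samples.
# Feature values are synthetic stand-ins for molecular-pattern measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 134 samples total: 59 biotic (label 1) and 75 abiotic (label 0),
# each described by a vector of molecular-pattern features.
n_features = 20
X_biotic = rng.normal(loc=0.5, scale=1.0, size=(59, n_features))
X_abiotic = rng.normal(loc=-0.5, scale=1.0, size=(75, n_features))
X = np.vstack([X_biotic, X_abiotic])
y = np.array([1] * 59 + [0] * 75)

# Random train/test split, as in the validation step the article mentions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

The key point the sketch illustrates is that accuracy is always measured on samples the model never saw during training, which is what gives a figure like the reported 90% its meaning.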

Almost immediately, the new AI method can be used to study the 3.5 billion-year-old rocks in the Pilbara region in Western Australia, where the world's oldest fossils are thought to exist. First found in 1993, these rocks were thought to be fossilized remains of microbes akin to cyanobacteria, which were the first living organisms to produce oxygen on Earth.

If confirmed, the bacteria's presence so early in Earth's history would mean the planet was friendly to thriving life much earlier than previously thought. However, those findings have remained controversial, as research has repeatedly pointed out that the evidence could also be explained by purely geological processes having nothing to do with ancient life. Perhaps AI holds the answer.

This research is described in a paper published Monday (Sept. 25) in the journal Proceedings of the National Academy of Sciences.


Johns Hopkins experts advise educators to embrace AI and ChatGPT – The Hub at Johns Hopkins

By Emily Gaines Buchler

Artificial intelligence (AI) chatbots like ChatGPT can solve math problems, draft computer code, write essays, and create digital art, all in mere seconds. But the knowledge and information spewed by these large language models are not always accurate, making fact-checking a necessity for anyone using them.

Since its launch in November 2022 by OpenAI, ChatGPT has kicked off a flurry of both excitement and concern over its potential to change how students work and learn. Will AI-powered chatbots open doors to new ways of knowledge-building and problem solving? What about plagiarism and cheating? Can schools, educators, and families do anything to prepare?

To answer these and other questions, three experts from Johns Hopkins University came together on Sept. 19 for "Could AI Upend Education?", a virtual event open to the public and part of the Johns Hopkins Briefing Series. The experts included James Diamond, an assistant professor in the School of Education and faculty lead of Digital Age Learning and Educational Technology Programs; Daniel Khashabi, an assistant professor of computer science in the Whiting School of Engineering; and Thomas Rid, a professor of strategic studies in the School of Advanced International Studies and the director of the Alperovitch Institute for Cybersecurity Studies. Lanie Rutkow, vice provost for interdisciplinary initiatives and a professor of health policy and management in the Bloomberg School of Public Health, moderated the conversation.

Here are five takeaways from the discussion:

"The sudden introduction of any new technology into an educational setting, especially one as powerful as [a chatbot with AI], rightly raises concerns," Diamond says. " There are concerns about plagiarism and cheating, [and] a reduced effort among some learners to solve problems and build their own understandings. There are also real concerns about AI perpetuating existing biases and inaccuracies, as well as privacy concerns about the use of technology."

"ChatGPT is a superpower in the classroom, and like power in general, it can either be used for good or for bad," Rid said.

"If we look at human knowledge as an ocean, [then] artificial intelligence and large language models allow us to navigate the deep water more quickly, but as soon as we get close to the ground or shore, the training material in the model is shallow, [and the bot] will start to hallucinate, or make things up. So reliability is a huge problem, and we have to get across to students that they cannot trust the output and have to verify and fact-check."

"[With new and emerging generative AI,] there are some really powerful implications for personalized learning [and] easing work burdens," Diamond said. "There's the potential to foster deeper interest and topics among students. There's also the potential of using [these tools] to create new materials or generate draft materials that learners build off and [use to] explore new ways to be creative."

"You can [use various programs to] identify to what extent what portions of a particular generation [or, say, essay] have been provided by the [large language] model," Khashabi said. "But none of these are robots. None of them are 100% reliable. There are scenarios under which we can say that with some high degree of confidence something has been generated, but for the next few years, as a technologist, I would say, 'Don't count on those.'"

"Parents and caretakers can sit next to their kid and explore a technology like ChatGPT with curiosity, openness, and a sense of wonder, [so] their kids see these tools as something to explore and use [in an experimental way] to create," Diamond said.

"Educators can have discussions with students about what might compel a learner to cheat. [They] can start to develop their students' AI literacy to help them understand what the technology is, what it can and cannot do, and what they can do with it."

"It really is essential that all stakeholdersparents, students, classroom teachers, school administrators, policymakerscome together and have discussions about how this technology is going to get used," Diamond said. "If we don't do that, then we'll wind up in a situation where we have the technology dictating the terms."


Meet the AI Expert Using Machines to Drive Medical Advances – Penn Medicine

César de la Fuente, PhD

In an era peppered by breathless discussions about artificial intelligence, pro and con, it makes sense to feel uncertain, or at least want to slow down and get a better grasp of where this is all headed. Trusting machines to do things typically reserved for humans is a little fantastical, a notion historically left to science fiction rather than science.

Not so much for César de la Fuente, PhD, the Presidential Assistant Professor in Psychiatry, Microbiology, Chemical and Biomolecular Engineering, and Bioengineering in Penn's Perelman School of Medicine and School of Engineering and Applied Science. Driven by his transdisciplinary background, de la Fuente leads the Machine Biology Group at Penn, aimed at harnessing machines to drive biological and medical advances.

A newly minted National Academy of Medicine Emerging Leaders in Health and Medicine (ELHM) Scholar, among a host of other awards and honors (more than 60), de la Fuente can sound almost diplomatic when describing the intersection of humanity, machines and medicine where he has made his way, ensuring multiple functions work together in harmony.

"Biology is complexity, right? You need chemistry, you need mathematics, physics and computer science, and principles and concepts from all these different areas, to try to begin to understand the complexity of biology," he said. "That's how I became a scientist."

Since his earliest days, de la Fuente has been fascinated by what he calls the intricate wonders of biology. In his late teens, for his undergraduate degree, de la Fuente immersed himself in microbiology, physics, mathematics, statistics, and chemistry, equipping himself with the necessary tools to unravel those biological mysteries.

In his early twenties, determined to understand biology at a fundamental level, de la Fuente decided to pursue a PhD, relocating to Canada from Spain. Overcoming language and cultural barriers, he embraced the challenges and opportunities that lay before him, determined to become a scientist.

His PhD journey centered around programming and digitizing the fundamental workings of biological systems. He specialized in bacteria, the simplest living biological system, as well as proteins and peptides, the least programmable of biomolecules and the workhorses of biology that perform every task in life, literally, from moving your mouth while speaking to blinking your eyes while reading this.

Although his research was successful, the landscape of using machines for biology remained uncharted. Upon completing his PhD, de la Fuente noted that the technology at the time still did not exist to manipulate peptides in any programmable way. "I felt dissatisfied with the available technologies for programming biology, which relied on slow, painstaking, and unpredictable trial-and-error experimentation. Biology remained elusive in terms of programmability."

De la Fuente was then recruited by MIT in 2015, at the time a leading home for AI research. However, AI had not yet been applied to biology or molecules. While computers were already adept at recognizing patterns in images and text, de la Fuente saw an opportunity to train computers for applications in biology, drawing on their ability to process the massive amounts of data that were becoming increasingly available.

His focus was to incorporate computational thinking into his work, essentially infusing AI into biology, particularly to discover new antibiotics.

"The motivation behind that is antibiotic resistance," de la Fuente said, adding that bacteria that have developed resistance to known antibiotics kill over one million people per year, a toll projected to grow to 10 million deaths annually by 2050 as resistant strains spread. "Making advances in this hugely disinvested area and coming up with solutions to this sort of critical problem has been a huge motivation for me and for our team."

The typical timeline for discovering antibiotics is three to six years using conventional methods, but de la Fuente's work in recent years has bucked that trend. With some of the algorithms that his group has developed, what used to take three to six years can now be done in days, or even hours. The potential antibiotic compounds they have identified need more evaluation before they are ready for clinical testing in humans. Even so, the accelerated rate of antibiotic discovery remains a point of pride for de la Fuente's lab.

This work launched the emerging field of AI for antibiotic discovery, following a pioneering study with his colleagues that led to the design of the first antibiotic using AI. That led de la Fuente to join Penn as a Presidential Assistant Professor, a post he holds today. Since then, much of his work has focused on pioneering computational and experimental methods to search inside the human body's own proteins for unknown but potentially useful molecules. By discovering them, his team could learn to manufacture them and use them as templates for antibiotic development.

"In 2021, we performed the first ever exploration of the human proteome, the set of all proteins in the human body, as a source of antibiotics," he said. "We found them encoded in proteins of the immune system, but also in proteins from the nervous system and the cardiovascular system, digestive system, all throughout our body."
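As a rough illustration of the mining approach described above, the Python sketch below slides a fixed-length window along a protein sequence and scores each peptide fragment with a toy heuristic (net charge plus hydrophobic content, two properties common in antimicrobial peptides). The sequence, window length, and scoring function are illustrative assumptions only; they stand in for, and greatly simplify, the lab's actual computational pipeline.

```python
# Toy sketch: scan a protein for candidate antimicrobial peptide fragments.
HYDROPHOBIC = set("AILMFWVY")
POSITIVE = set("KR")
NEGATIVE = set("DE")

def toy_score(peptide: str) -> float:
    """Crude heuristic: cationic, moderately hydrophobic peptides score higher."""
    charge = sum(aa in POSITIVE for aa in peptide) - sum(aa in NEGATIVE for aa in peptide)
    hydrophobic_frac = sum(aa in HYDROPHOBIC for aa in peptide) / len(peptide)
    return charge + 5 * hydrophobic_frac

def scan_protein(sequence: str, window: int = 15):
    """Yield (start, peptide, score) for every window-length fragment."""
    for i in range(len(sequence) - window + 1):
        peptide = sequence[i:i + window]
        yield i, peptide, toy_score(peptide)

# Hypothetical protein fragment, used only to show the scanning pattern.
protein = "MKRLLVLALLAIAGKKWFRIVKKLLPKAAGDEE"
best = max(scan_protein(protein), key=lambda hit: hit[2])
print(f"top candidate at position {best[0]}: {best[1]} (score {best[2]:.2f})")
```

In a real pipeline, the toy score would be replaced by trained predictive models, and top-ranked fragments would then be synthesized and tested in the lab, which is where the templates for antibiotic development come from.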

Just this summer, de la Fuente continued to draw inspiration for antibiotic discovery from a curious source, one that has been extinct for tens of thousands of years.

Recently, de la Fuente's team applied machine learning to explore the proteomes not just of living humans like us, but of extinct organisms (think: Neanderthals and Denisovans) to find potential new antibiotics, launching the field of what they call "molecular de-extinction" and providing a new framework for thinking about drug discovery. When asked what he sees as the future of harnessing machines for human benefit, and what surprises him about his field, de la Fuente is remarkably honest.

"I've been working in the antibiotics field for a long time, and it has become a sort of under-invested area of research. Sometimes it feels like there's only a couple of us out there doing this work, so it feels weird sometimes," he said. With remarkable advances in machine learning and artificial intelligence in the last half decade, any new support may not be human but machine.

"That combination between machine intelligence and human ingenuity, I think, will be part of the future, and we're going to see a lot of meaningful and important research coming out from that intersection. I believe we are on the cusp of a new era in science where advances enabled by AI will help control antibiotic resistance, infectious disease outbreaks, and future pandemics."


New Lockheed Martin system will manage satellite constellations … – Space.com

Lockheed Martin just announced its "Operations Center of the Future," a new facility that the company hopes will make its growing constellations of Earth-orbiting satellites easier to manage.

Situated near Denver, this facility is a major innovation in satellite operations, company representatives said, with the capacity to handle multiple space missions at once through a web-enabled, secure cloud framework.

The operations center is fully funded by the company and uses Lockheed's Compass Mission Planning and Horizon Command and Control software systems. These software platforms have already been put into service on over 50 spacecraft missions, ranging from government contract work to research and commercial ventures.

With this ground system incorporated into the new facility, the company says an individual operator could potentially oversee both individual satellites and entire heterogeneous constellations of varying designs from virtually anywhere with an internet connection.


Maria Demaree, vice president and general manager at Lockheed Martin Space's National Security Space division, praised the facility's advanced technology in a Lockheed statement. "The Operations Center of the Future's next-generation AI, automation and cloud capabilities enable operators to remain closer to the mission than ever before, regardless of their physical location," Demaree said. "Remote operators can instantly receive timely mission alerts about satellite operations, and then securely log in to make smart, fast decisions from virtually anywhere."

The capability of the facility's ground system was on display earlier this year when it successfully flew Lockheed's In-space Upgrade Satellite System demonstrator, which was designed to highlight the potential for small satellites to maintain infrastructure in space and even enhance it with new functionality post-deployment.

A major feature of the center is its mix of automation, AI, and machine learning, which Lockheed says will help manage the rapidly increasing number (and complexity) of satellite constellations being deployed in an already crowded low Earth orbit.

The company also touted the facility's lean operations staff thanks to a flexible software framework that can be refactored and adjusted to suit different mission types and needs.

How well it does all that remains to be seen, obviously, and there's plenty of reason for skepticism at this point. With every industry making moves to incorporate AI and machine learning into their products and services, many companies with big AI plans have so far failed to demonstrate their real-world utility beyond the hype.

Lockheed Martin might very well have developed a system with minimal human interaction that can manage the maddeningly complex trajectories of tens of thousands of satellites in real time, and it would be quite a feat, if so. We might also end up with a very sophisticated version of ChatGPT in Mission Control making stuff up as it goes along, just with satellites flying through streams of space junk.

Whatever the case may be, we'll know soon enough, as Lockheed's Operations Center of the Future is expected to play a starring role in directing the company's forthcoming space missions, including Pony Express 2, TacSat, and the LM 400 on-orbit tech demonstration.
