Johns Hopkins experts advise educators to embrace AI and ChatGPT – The Hub at Johns Hopkins

By Emily Gaines Buchler

Artificial intelligence (AI) chatbots like ChatGPT can solve math problems, draft computer code, write essays, and create digital art, all in mere seconds. But the knowledge and information spewed by these large language models are not always accurate, making fact-checking a necessity for anyone using them.

Since its launch in November 2022 by OpenAI, ChatGPT has kicked off a flurry of both excitement and concern over its potential to change how students work and learn. Will AI-powered chatbots open doors to new ways of knowledge-building and problem solving? What about plagiarism and cheating? Can schools, educators, and families do anything to prepare?

To answer these and other questions, three experts from Johns Hopkins University came together on Sept. 19 for "Could AI Upend Education?", a virtual event open to the public and part of the Johns Hopkins Briefing Series. The experts included James Diamond, an assistant professor in the School of Education and faculty lead of Digital Age Learning and Educational Technology Programs; Daniel Khashabi, an assistant professor of computer science in the Whiting School of Engineering; and Thomas Rid, a professor of strategic studies in the School of Advanced International Studies and the director of the Alperovitch Institute for Cybersecurity Studies. Lanie Rutkow, vice provost for interdisciplinary initiatives and a professor of health policy and management in the Bloomberg School of Public Health, moderated the conversation.

Here are key takeaways from the discussion:

"The sudden introduction of any new technology into an educational setting, especially one as powerful as [a chatbot with AI], rightly raises concerns," Diamond said. "There are concerns about plagiarism and cheating, [and] a reduced effort among some learners to solve problems and build their own understandings. There are also real concerns about AI perpetuating existing biases and inaccuracies, as well as privacy concerns about the use of the technology."

"ChatGPT is a superpower in the classroom, and like power in general, it can either be used for good or for bad," Rid said.

"If we look at human knowledge as an ocean, [then] artificial intelligence and large language models allow us to navigate the deep water more quickly, but as soon as we get close to the ground or shore, the training material in the model is shallow, [and the bot] will start to hallucinate, or make things up. So reliability is a huge problem, and we have to get across to students that they cannot trust the output and have to verify and fact-check."

"[With new and emerging generative AI,] there are some really powerful implications for personalized learning [and] easing work burdens," Diamond said. "There's the potential to foster deeper interest in topics among students. There's also the potential of using [these tools] to create new materials or generate draft materials that learners build off and [use to] explore new ways to be creative."

"You can [use various programs to] identify to what extent what portions of a particular generation [or, say, essay] have been provided by the [large language] model," Khashabi said. "But none of these are robust. None of them are 100% reliable. There are scenarios under which we can say that with some high degree of confidence something has been generated, but for the next few years, as a technologist, I would say, 'Don't count on those.'"

"Parents and caretakers can sit next to their kid and explore a technology like ChatGPT with curiosity, openness, and a sense of wonder, [so] their kids see these tools as something to explore and use [in an experimental way] to create," Diamond said.

"Educators can have discussions with students about what might compel a learner to cheat. [They] can start to develop their students' AI literacy to help them understand what the technology is, what it can and cannot do, and what they can do with it."

"It really is essential that all stakeholders (parents, students, classroom teachers, school administrators, policymakers) come together and have discussions about how this technology is going to get used," Diamond said. "If we don't do that, then we'll wind up in a situation where we have the technology dictating the terms."


Meet the AI Expert Using Machines to Drive Medical Advances – Penn Medicine

César de la Fuente, PhD

In an era peppered with breathless discussions about artificial intelligence, pro and con, it makes sense to feel uncertain, or at least to want to slow down and get a better grasp of where this is all headed. Trusting machines to do things typically reserved for humans is a little fantastical, historically the stuff of science fiction rather than science.

Not so much for César de la Fuente, PhD, the Presidential Assistant Professor in Psychiatry, Microbiology, Chemical and Biomolecular Engineering, and Bioengineering in Penn's Perelman School of Medicine and School of Engineering and Applied Science. Driven by his transdisciplinary background, de la Fuente leads the Machine Biology Group at Penn, which aims to harness machines to drive biological and medical advances.

A newly minted National Academy of Medicine Emerging Leaders in Health and Medicine (ELHM) Scholar, and the recipient of a host of other awards and honors (more than 60), de la Fuente can sound almost diplomatic when describing the intersection of humanity, machines, and medicine where he has made his way, ensuring multiple functions work together in harmony.

"Biology is complexity, right? You need chemistry, you need mathematics, physics, and computer science, and principles and concepts from all these different areas, to try to begin to understand the complexity of biology," he said. "That's how I became a scientist."

Since his earliest days, de la Fuente has been fascinated by what he calls the intricate wonders of biology. In his late teens, for his undergraduate degree, de la Fuente immersed himself in microbiology, physics, mathematics, statistics, and chemistry, equipping himself with the necessary tools to unravel those biological mysteries.

In his early twenties, determined to understand biology at a fundamental level, de la Fuente decided to pursue a PhD, relocating to Canada from Spain. Overcoming language and cultural barriers, he embraced the challenges and opportunities that lay before him, determined to become a scientist.

His PhD journey centered on programming and digitizing the fundamental workings of biological systems. He specialized in bacteria, the simplest living biological system, as well as proteins and peptides, the least programmable of biomolecules and the workhorses of biology that perform every task in life, literally, from moving your mouth while speaking to blinking your eyes while reading this.

Although his research was successful, the landscape of using machines for biology remained uncharted. Upon completing his PhD, de la Fuente noted that the technology to manipulate peptides in any programmable way still did not exist. "I felt dissatisfied with the available technologies for programming biology, which relied on slow, painstaking, and unpredictable trial-and-error experimentation," he said. "Biology remained elusive in terms of programmability."

De la Fuente was then recruited by MIT in 2015, at the time a leading home for AI research. However, AI had not yet been applied to biology or molecules. While computers were already adept at recognizing patterns in images and text, de la Fuente saw an opportunity to train computers for applications in biology, harnessing their ability to process the massive amounts of data that were becoming increasingly available.

His focus was to incorporate computational thinking into his work, essentially infusing AI into biology, particularly to discover new antibiotics.

"The motivation behind that is antibiotic resistance," de la Fuente said, adding that bacteria that have developed resistance to known antibiotics kill over one million people per year, a toll projected to grow to 10 million deaths annually by 2050 as resistant strains spread. "Making advances in this hugely disinvested area and coming up with solutions to this sort of critical problem has been a huge motivation for me and for our team."

The typical timeline for discovering antibiotics is three to six years using conventional methods, but de la Fuente's work in recent years has bucked that trend. With some of the algorithms that his group has developed, what used to take three to six years can now be done in days, or even hours. The potential antibiotic compounds they have identified need more evaluation before they are ready for clinical testing in humans. Even so, the accelerated rate of antibiotic discovery remains a point of pride for de la Fuente's lab.

This work launched the emerging field of AI for antibiotic discovery, following a pioneering study with his colleagues that led to the design of the first antibiotic using AI. That led de la Fuente to join Penn as a Presidential Assistant Professor, a post he holds today. Since then, much of his work has focused on pioneering computational and experimental methods to search inside the human body's own proteins for unknown but potentially useful molecules. By discovering them, his team could learn to manufacture them and use them as templates for antibiotic development.

"In 2021, we performed the first-ever exploration of the human proteome, the set of all proteins in the human body, as a source of antibiotics," he said. "We found them encoded in proteins of the immune system, but also in proteins from the nervous system, the cardiovascular system, the digestive system, all throughout our body."

Just this summer, de la Fuente drew antibiotic-discovery inspiration from a curious source: organisms that have been extinct for tens of thousands of years.

Recently, de la Fuente's team applied machine learning to explore the proteomes not just of living humans like us, but of extinct organisms (think: Neanderthals and Denisovans) to find potential new antibiotics, launching the field of what they call "molecular de-extinction" and providing a new framework for thinking about drug discovery. When asked what surprises him about his field, de la Fuente is remarkably candid.

"I've been working in the antibiotics field for a long time, and it has become a sort of under-invested area of research. Sometimes it feels like there's only a couple of us out there doing this work, so it feels weird sometimes," he said. With the remarkable advances in machine learning and artificial intelligence over the last half decade, any new support may come not from humans but from machines.

"That combination between machine intelligence and human ingenuity, I think, will be part of the future, and we're going to see a lot of meaningful and important research coming out of that intersection. I believe we are on the cusp of a new era in science where advances enabled by AI will help control antibiotic resistance, infectious disease outbreaks, and future pandemics."


New Lockheed Martin system will manage satellite constellations … – Space.com

Lockheed Martin just announced its "Operations Center of the Future," a new facility that the company hopes will make its growing constellations of Earth-orbiting satellites easier to manage.

Situated near Denver, this facility is a major innovation in satellite operations, company representatives said, with the capacity to handle multiple space missions at once through a web-enabled, secure cloud framework.

The operations center is fully funded by the company and uses Lockheed's Compass Mission Planning and Horizon Command and Control software systems. These software platforms have already been put into service on over 50 spacecraft missions, spanning government contract work, research, and commercial ventures.

With this ground system incorporated into the new facility, the company says, an individual operator could potentially oversee both individual satellites and entire heterogeneous constellations of varying designs from virtually anywhere with an internet connection.


Maria Demaree, vice president and general manager at Lockheed Martin Space's National Security Space division, praised the facility's advanced technology in a Lockheed statement. "The Operations Center of the Future's next-generation AI, automation and cloud capabilities enable operators to remain closer to the mission than ever before, regardless of their physical location," Demaree said. "Remote operators can instantly receive timely mission alerts about satellite operations, and then securely log in to make smart, fast decisions from virtually anywhere."

The capability of the facility's ground system was on display earlier this year when it successfully flew Lockheed's In-space Upgrade Satellite System demonstrator, which was designed to highlight the potential for small satellites to maintain infrastructure in space and even enhance it with new functionality post-deployment.

A major feature of the center is its mix of automation, AI, and machine learning, which Lockheed says will help manage the rapidly increasing number (and complexity) of satellite constellations being deployed in an already crowded low Earth orbit.

The company also touted the facility's lean operations staff, enabled by a flexible software framework that can be refactored and adjusted to suit different mission types and needs.

How well it does all that remains to be seen, obviously, and there's plenty of reason for skepticism at this point. With every industry making moves to incorporate AI and machine learning into its products and services, many companies with big AI plans have so far failed to demonstrate real-world utility beyond the hype.

Lockheed Martin might very well have developed a system with minimal human interaction that can manage the maddeningly complex trajectories of tens of thousands of satellites in real time, and it would be quite a feat, if so. We might also end up with a very sophisticated version of ChatGPT in Mission Control making stuff up as it goes along, just with satellites flying through streams of space junk.

Whatever the case may be, we'll know soon enough, as Lockheed's Operations Center of the Future is expected to play a starring role in directing the company's forthcoming space missions, including Pony Express 2, TacSat, and the LM 400 on-orbit tech demonstration.

Continued here:

New Lockheed Martin system will manage satellite constellations ... - Space.com

Unleashing the power of AI to track animal behavior – Salk Institute

September 26, 2023

Salk scientists create GlowTrack to track human and animal behavior with better resolution and more versatility

LA JOLLA: Movement offers a window into how the brain operates and controls the body. From clipboard-and-pen observation to modern artificial intelligence-based techniques, tracking human and animal movement has come a long way. Current cutting-edge methods utilize artificial intelligence to automatically track parts of the body as they move. However, training these models is still time-intensive and limited by the need for researchers to manually mark each body part hundreds to thousands of times.

Now, Associate Professor Eiman Azim and team have created GlowTrack, a non-invasive movement-tracking method that uses fluorescent dye markers to train artificial intelligence. GlowTrack is robust, time-efficient, and high-definition, capable of tracking a single digit on a mouse's paw or hundreds of landmarks on a human hand.

The technique, published in Nature Communications on September 26, 2023, has applications spanning biology, robotics, medicine, and beyond.

"Over the last several years, there has been a revolution in tracking behavior as powerful artificial intelligence tools have been brought into the laboratory," says Azim, senior author and holder of the William Scandling Developmental Chair. "Our approach makes these tools more versatile, improving the ways we capture diverse movements in the laboratory. Better quantification of movement gives us better insight into how the brain controls behavior and could aid in the study of movement disorders like amyotrophic lateral sclerosis (ALS) and Parkinson's disease."

Current methods to capture animal movement often require researchers to manually and repeatedly mark body parts on a computer screen, a time-consuming process subject to human error and time constraints. Human annotation means that these methods can usually only be used in a narrow testing environment, since artificial intelligence models specialize to the limited amount of training data they receive. For example, if the lighting, the orientation of the animal's body, the camera angle, or any number of other factors were to change, the model would no longer recognize the tracked body part.

To address these limitations, the researchers used fluorescent dye to label parts of the animal or human body. With these invisible fluorescent dye markers, an enormous amount of visually diverse data can be created quickly and fed into the artificial intelligence models without the need for human annotation. Once fed this robust data, these models can be used to track movements across a much more diverse set of environments and at a resolution that would be far more difficult to achieve with manual human labeling.
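The core of that auto-labeling idea is simple to sketch: under the right illumination the dye shows up as a bright blob in an otherwise dark fluorescence frame, so basic thresholding yields a keypoint label with no human in the loop. The snippet below is only an illustrative sketch of that general idea, not GlowTrack's published pipeline; the function name, the fixed threshold, and the synthetic frame are invented for the example.

```python
import numpy as np

def centroid_label(fluor_frame: np.ndarray, threshold: float = 0.5):
    """Return the (row, col) centroid of the bright dye blob, or None if no pixel passes."""
    mask = fluor_frame > threshold   # dye pixels glow far brighter than the background
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic fluorescence frame: one 3x3 bright spot stands in for the dye marker.
frame = np.zeros((64, 64))
frame[30:33, 40:43] = 1.0

label = centroid_label(frame)        # → (31.0, 41.0)
```

Each such automatically generated label would then be paired with the simultaneously captured visible-light frame to form one training example, which is how a large, visually diverse dataset could accumulate without manual annotation.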

This opens the door for easier comparison of movement data between studies, as different laboratories can use the same models to track body movement across a variety of situations. According to Azim, comparison and reproducibility of experiments are essential in the process of scientific discovery.

"Fluorescent dye markers were the perfect solution," says first author Daniel Butler, a Salk bioinformatics analyst. "Like the invisible ink on a dollar bill that lights up only when you want it to, our fluorescent dye markers can be turned on and off in the blink of an eye, allowing us to generate a massive amount of training data."

In the future, the team is excited to support diverse applications of GlowTrack and pair its capabilities with other tracking tools that reconstruct movements in three dimensions, and with analysis approaches that can probe these vast movement datasets for patterns.

"Our approach can benefit a host of fields that need more sensitive, reliable, and comprehensive tools to capture and quantify movement," says Azim. "I am eager to see how other scientists and non-scientists adopt these methods, and what unique, unforeseen applications might arise."

Other authors include Alexander Keim and Shantanu Ray of Salk.

The work was supported by the UC San Diego CMG Training Program, a Jesse and Caryl Philips Foundation Award, the National Institutes of Health (R00NS088193, DP2NS105555, R01NS111479, RF1NS128898, and U19NS112959), the Searle Scholars Program, the Pew Charitable Trusts, and the McKnight Foundation.

DOI: https://doi.org/10.1038/s41467-023-41565-3


Your Boss’s Spyware Could Train AI to Replace You – WIRED

David Autor, a professor of economics at MIT, says he also thinks AI could be trained in this way. While there is a lot of employee surveillance happening in the corporate world, and some of the data that's collected from it could be used to help train AI programs, simply learning from how people interact with AI tools throughout the workday could also help train those programs to replace workers.

"They will learn from the workflow in which they're engaged," Autor says. "Often people will be in the process of working with a tool, and the tool will be learning from that interaction."

Whether you're training an AI tool directly by interacting with it throughout the day, or the data you're producing while you work is simply being used to create an AI program that can do the work you're doing, there are multiple ways in which a worker could inadvertently end up training an AI program to replace them. Even if the program doesn't end up being incredibly effective, a lot of companies might be happy with an AI program that's good enough, because it doesn't require a salary and benefits.

"I think there are a lot of discretionary white-collar jobs where you're kind of using a mixture of hard information and soft information and trying to make advanced decisions," Autor says. "People aren't that good at that, machines aren't that good at that, but probably machines can be pretty much as good as people."

Autor says he doesn't see a labor market apocalypse coming. Many workers won't be entirely replaced but will simply have their jobs changed by AI, he says, while some workers will certainly be made redundant by advancements in AI. The problem, he says, is what happens to those workers once they're no longer able to find a well-paying job with the education and skill sets they have.

"It's not that we're going to run out of work. It's much more that people are doing something they're good at, and that thing goes away. And then they end up doing a kind of generic activity that everybody's good at, which means it pays very little: food service, cleaning, security, vehicle driving," Autor says. "These are low-paying activities."

Once someone's automated out of a well-paying job, they can end up slipping through the cracks. Autor says we've seen this happen in the past.

"The hollowing out of manufacturing and office work over the past 40 years has definitely put downward pressure on the wages of people who would do that type of work, and it's not because they're doing it now at a lower rate of pay. It's because they're not doing it," Autor says.

Frey says politicians will need to offer solutions to those who fall through the cracks to prevent the destabilization of the economy and society. That would likely include offering social safety net programs to those affected. Frey has written extensively on the effects of the first Industrial Revolution, and he says there are lessons to be learned there. In Britain, for example, there was a program called the Poor Laws, under which people who were harmed by automation were given financial relief.
