Media Search:



Elon Musk ‘can afford to include AM radio in his Teslas’: Democrats and Republicans agree AM should go in EVs – MarketWatch

Republican Sen. Ted Cruz of Texas, Democratic Sen. Ed Markey of Massachusetts and other U.S. lawmakers rolled out a bill this week that would require car companies to include AM radio in their new vehicles, aiming to end the growing trend of electric vehicles being made without that feature.

“AM radio is a critical bulwark for democracy, providing a platform for alternative viewpoints and the ability for elected officials to share our efforts with our constituents,” Cruz said in a statement Thursday.

Political pundit Sean Hannity made a similar point last month, saying leaving AM radio out of new EVs is a direct hit politically on conservative talk radio.

Markey, meanwhile, talked up the importance of the safety alerts that are broadcast over AM radio, echoing a point made by seven former Federal Emergency Management Agency administrators in a letter earlier this year.

“For decades, free AM broadcast radio has been an essential tool in emergencies, a crucial part of our diverse media ecosystem and an irreplaceable source for news, weather, sports and entertainment for tens of millions of listeners,” Markey said in a statement. “Car makers shouldn’t tune out AM radio in new vehicles or put it behind a costly digital paywall.”

Some automakers have skipped having AM radio in their EVs, saying AM signals are subject to interference from those vehicles’ motors.

Markey said in March that eight car makers, out of the 20 that the senator contacted, told him they have removed broadcast AM radio from their EVs. The eight were BMW (BMW), Ford (F), Mazda (7261), Polestar (PSNYW), Rivian (RIVN), Tesla (TSLA), Volkswagen (VOW3) and Volvo (VOLV.B).

Ford also plans to stop putting AM radio in most new gasoline-powered vehicles starting in 2024, according to a Detroit Free Press report.

Other backers of the new bill, called the AM for Every Vehicle Act, are Sens. Tammy Baldwin, a Wisconsin Democrat; Deb Fischer, a Nebraska Republican; Ben Ray Luján, a New Mexico Democrat; Bob Menendez, a New Jersey Democrat; J.D. Vance, an Ohio Republican; and Roger Wicker, a Mississippi Republican.

The measure also has been rolled out in the U.S. House of Representatives, where its supporters include Reps. Josh Gottheimer, a New Jersey Democrat; Tom Kean, a New Jersey Republican; Rob Menendez, a New Jersey Democrat and son of Sen. Menendez; Bruce Westerman, an Arkansas Republican; and Marie Gluesenkamp Perez, a Democrat from Washington state.

“I would think that if Elon Musk has enough money to buy Twitter and send rockets to space, he can afford to include AM radio in his Teslas,” Gottheimer said in a statement, referring to the Tesla CEO who also leads SpaceX. “Instead, Elon Musk and Tesla and other car manufacturers are putting public safety and emergency response at risk.”

Tesla didn’t respond to a request for comment, but a trade group for car makers criticized the AM for Every Vehicle Act.

“Mandating AM radios in all vehicles is unnecessary. Congress has never mandated radio features in vehicles ever before,” said the Alliance for Automotive Innovation in a statement.

Auto makers remain 100 percent committed to ensuring drivers have access to public alerts and safety warnings through the Integrated Public Alert and Warning System (IPAWS), the industry group added, referring to a key FEMA system that the alliance said can distribute warnings in a number of ways, including by internet-based radio or satellite radio.

“The point is this: whether or not AM radio is physically installed in vehicles in the future has no bearing on the various methods of delivering emergency communications that alert the public,” the group continued. “This is simply a bill to prop up and give preference to a particular technology that’s now competing with other communications options and adapting to changing listenership.”

Politicians Need to Learn How AI Works, Fast – WIRED

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally “go quite wrong,” in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But the hearing also saw agreement that no one wants to kneecap a technology that could potentially increase productivity and give the US a lead in a new technological revolution.

Worried senators might consider talking to Missy Cummings, a onetime fighter pilot and engineering and robotics professor at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and aircraft, and earlier this year returned to academia after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla’s Autopilot and self-driving cars. Cummings’ perspective might help politicians and policymakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.

Cummings told me this week that she left the NHTSA with a sense of profound concern about the autonomous systems that are being deployed by many car manufacturers. “We’re in serious trouble in terms of the capabilities of these cars,” Cummings says. “They’re not even close to being as capable as people think they are.”

I was struck by the parallels with ChatGPT and similar chatbots stoking excitement and concern about the power of AI. Automated driving features have been around for longer, but like large language models they rely on machine-learning algorithms that are inherently unpredictable, hard to inspect, and require a different kind of engineering thinking from that of the past.

Also like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. There was a permissive regulatory environment around autonomous cars in the mid-2010s, with government officials loath to apply the brakes to a technology that promised to be worth billions for US businesses.

After billions spent on the technology, self-driving cars are still beset by problems, and some auto companies have pulled the plug on big autonomy projects. Meanwhile, as Cummings says, the public is often unclear about how capable semiautonomous technology really is.

In one sense, it’s good to see governments and lawmakers being quick to suggest regulation of generative AI tools and large language models. The current panic is centered on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently fabricating facts.

At this week’s Senate hearing, Altman of OpenAI, which gave us ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. “My worst fear is that we – the field, the technology, the industry – cause significant harm to the world,” Altman said during the hearing.

How generative A.I. and low-code are speeding up innovation – CNBC

Independently, generative artificial intelligence and low-code software are two highly sought-after technologies. But experts say that together, the two harmonize in a way that accelerates innovation beyond the status quo.

Low-code development allows people to build applications with minimal need for hard code, instead using visual tools and other models to develop. While the intersection of low-code and AI feels natural, it's crucial to consider nuances like data integrity and security to ensure a meaningful integration.

Microsoft's Low-Code Signals 2023 report says 87% of chief innovation officers and IT professionals believe "increased AI and automation embedded into low-code platforms would help them better use the full set of capabilities."

According to Dinesh Varadharajan, CPO at low-code/no-code work platform Kissflow, the convergence of AI and low-code enables systems to manage the work rather than humans having to work for the systems.

Additionally, rather than the AI revolution replacing low-code, Varadharajan said, "One doesn't replace the other, but the power of two is going to bring a lot of possibilities."

Varadharajan notes that as AI and low-code technology come together, the development gap closes. Low-code software increases the accessibility of development across organizations (often to so-called citizen developers) while generative AI increases organizational efficiency and congruence.

According to Jim Rose, CEO of an automation platform for software delivery teams called CircleCI, these large language models that serve as the foundation of generative AI platforms will ultimately be able to change the language of low-code. Rather than building an app or website through a visual design format, Rose said, "What you'll be able to do is query the models themselves and say, for example, 'I need an easy-to-manage e-commerce shop to sell vintage shoes.'"
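For readers curious what that kind of natural-language request could look like in code, here is a minimal sketch in Python. It assumes OpenAI's public chat-completions HTTP endpoint and an API key in the OPENAI_API_KEY environment variable; the model name, system prompt and output handling are illustrative choices, not a description of any vendor's actual low-code product.

```python
import os
import requests

# Minimal sketch: ask a large language model to draft a low-code app
# blueprint from a plain-English request. The endpoint and payload shape
# follow OpenAI's public chat-completions API; any hosted LLM with a
# similar interface could be substituted.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set by the caller

prompt = "I need an easy-to-manage e-commerce shop to sell vintage shoes."

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "system",
                "content": "You design low-code app blueprints. Reply with a "
                           "list of pages, data tables and workflows.",
            },
            {"role": "user", "content": prompt},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```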

Rose agrees that the technology has not quite reached this point, in part because "you have to know how to talk" to generative AI to get what you're looking for. Kissflow's Varadharajan says he can see AI taking over task management within a year, and perhaps intersecting with low-code in a more meaningful way not long after.

Like anything involving AI, there are plenty of nuances that business leaders must take into account for successful implementation and iteration of AI-powered low-code.

Don Schuerman, CTO of enterprise software company Pega, prioritizes what he calls "a responsible and ethical AI framework."

This includes the need for transparency. In other words, can you explain how and why AI is making a particular decision? Without that clarity, he says, companies can end up with a system that fails to serve end users in a fair and responsible way.

This melds with the need for bias testing, he added. "There are latent biases embedded in our society, which means there are latent biases embedded in our data," he said. "That means AI will pick up those biases unless we are explicitly testing and protecting against them."
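To make the idea of explicit bias testing concrete, here is a minimal sketch in Python that compares a model's outcomes across demographic groups. The column names, the tiny sample data and the 80% ("four-fifths rule") threshold are illustrative assumptions, not Pega's methodology.

```python
import pandas as pd

# Minimal sketch of a disparity check: compare a model's positive-outcome
# rate and error rate across demographic groups, then flag groups whose
# positive rate falls below 80% of the best group's rate.
def disparity_report(df: pd.DataFrame) -> pd.DataFrame:
    stats = df.groupby("group").apply(
        lambda g: pd.Series({
            "positive_rate": (g["prediction"] == 1).mean(),
            "error_rate": (g["prediction"] != g["label"]).mean(),
            "n": len(g),
        })
    )
    stats["flagged"] = stats["positive_rate"] < 0.8 * stats["positive_rate"].max()
    return stats

if __name__ == "__main__":
    # Invented toy data: true labels vs. model predictions for two groups.
    sample = pd.DataFrame({
        "group":      ["a", "a", "a", "b", "b", "b"],
        "label":      [1, 0, 1, 1, 0, 1],
        "prediction": [1, 0, 1, 0, 0, 1],
    })
    print(disparity_report(sample))
```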

Schuerman is a proponent of "keeping the human in the loop," not only for checking errors and making changes, but also to consider what machine learning algorithms have not yet mastered: customer empathy. By prioritizing customer empathy, organizations can maintain systems and recommend products and services actually relevant to the end user.

For Varadharajan, the biggest challenge he foresees with the convergence of AI and low-code is change management. Enterprise users, in particular, are used to working in a certain way, he says, which could make them the last segment to adopt the AI-powered low-code shift.

Whatever risks a company is dealing with, maintaining the governance layer is what will help leaders keep up with AI as it evolves. "Even now, we are still grappling with the possibilities of what generative AI can do," Varadharajan said. "As humans, we will also evolve. We will figure out ways to manage the risk."

While many generative AI platforms stem from open-source models, CircleCI's Rose says there's a successor of a different kind to come. "The next wave is closed-loop models that are trained against proprietary data," he said.

Proprietary data and closed-loop models will still have to reckon with the need for transparency, of course. Yet the ability for organizations to keep data secure in this small-model style could quickly shift the capacities of generative AI across industries.

Generative AI and low-code software put innovation on a freeway, as long as organizations don't compromise on the responsibility factor, experts said. In the modern era, innovation speed is a must-have to be competitive. Just look at Bard, the Google offering that is set to compete with OpenAI's ChatGPT in the generative AI space.

According to Schuerman, with AI and low-code, "I'm starting out further down the field than I did before." By shortening the path from an idea to experimentation and ultimately to a live product, he said, AI-powered low-code accelerates the speed of innovation.

Would you trust an AI doctor? New research shows patients are split – University of Arizona

Artificial intelligence-powered medical treatment options are on the rise and have the potential to improve diagnostic accuracy, but a new study led by University of Arizona Health Sciences researchers found that about 52% of participants would choose a human doctor rather than AI for diagnosis and treatment.

The paper, “Diverse Patients’ Attitudes Towards Artificial Intelligence (AI) in Diagnosis,” was published today in the journal PLOS Digital Health.

The research was led by Marvin J. Slepian, MD, JD, Regents Professor of Medicine at the UArizona College of Medicine – Tucson and member of the BIO5 Institute, and Christopher Robertson, JD, professor of law and associate dean for strategic initiatives at Boston University. The research team found that most patients aren’t convinced that the diagnoses provided by AI are as trustworthy as those delivered by human medical professionals.

“While many patients appear resistant to the use of AI, accuracy of information, nudges and a listening patient experience may help increase acceptance,” Dr. Slepian said of the study’s other primary finding: that a human touch can help clinical practices use AI to their advantage and earn patients’ trust. “To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required.”

In the National Institutes of Health-funded study, participants were placed into scenarios as mock patients and asked whether they would prefer to have an AI system or a physical doctor for diagnosis and treatment, and under what circumstances.

In the first phase, researchers conducted structured interviews with actual patients, testing their reactions to current and future AI technologies. In the second phase of the study, researchers polled 2,472 participants across diverse ethnic, racial and socioeconomic groups using a blinded, randomized survey that tested eight variables.

Overall, participants were almost evenly split, with more than 52% choosing human doctors as a preference versus approximately 47% choosing an AI diagnostic method. If study participants were told that their primary care physicians felt AI was superior and helpful as an adjunct to diagnosis, or were otherwise nudged to consider AI as good, their acceptance of AI on re-questioning increased. This signaled the significance of the human physician in guiding a patient’s decision.
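For readers who want to see the arithmetic behind a "nearly even split" claim, here is a hedged sketch of a one-sample proportion test on the published headline figures. The rounded count and the choice of test are illustrative assumptions, not the study's actual analysis.

```python
from statsmodels.stats.proportion import proportions_ztest

# Minimal sketch: did significantly more than half of respondents prefer a
# human doctor? The headline figures (52% of 2,472 participants) come from
# the article; the count is rounded and the one-sample z-test is our own
# illustrative choice.
n_respondents = 2472
n_prefer_human = round(0.52 * n_respondents)  # roughly 1,285 respondents

z_stat, p_value = proportions_ztest(
    count=n_prefer_human,
    nobs=n_respondents,
    value=0.5,  # null hypothesis: an exactly even split
)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```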

Disease severity (leukemia versus sleep apnea) did not affect participants’ trust in AI. Compared to white participants, Black participants selected AI less often and Native Americans selected it more often. Older participants were less likely to choose AI, as were those who self-identified as politically conservative or viewed religion as important.

The racial, ethnic and social disparities identified suggest that different groups will warrant specific sensitivity and attention when informing them about the value and utility of AI in enhancing diagnoses.

“I really feel this study has the import for national reach. It will guide many future studies and clinical translational decisions even now,” Dr. Slepian said. “The onus will be on physicians and others in health care to ensure that information that resides in AI systems is accurate, and to continue to maintain and enhance the accuracy of AI systems as they will play an increasing role in the future of health care.”

Co-authors include Andrew Woods, JD, Milton O. Riepe professor of law and co-director of the TechLaw program at the UArizona James E. Rogers College of Law; Kelly Bergstrand, PhD, associate professor of sociology and anthropology at the University of Texas at Arlington; Jess Findley, JD, PhD, professor of practice and director of bar and academic success at UArizona James E. Rogers College of Law; and Cayley Balser, JD, postgraduate at Innovation for Justice, housed at both the UArizona James E. Rogers College of Law and the University of Utah David Eccles School of Business.

This research was funded in part by the National Institutes of Health under award no. 3R25HL126140-05S1.

Why AI’s diversity crisis matters, and how to tackle it – Nature.com

Inclusivity groups focus on promoting diverse builders for future artificial-intelligence projects. Credit: Shutterstock

Artificial intelligence (AI) is facing a diversity crisis. If it isn’t addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting intelligence will be flawed, lacking varied social-emotional and cultural knowledge.

In a 2019 report from New York University’s AI Now Institute, researchers noted that more than 80% of AI professors were men. Furthermore, Black individuals made up just 2.5% of Google employees and 4% of those working at Facebook and Microsoft. In addition, the report authors noted that the overwhelming focus on ‘women in tech’ when discussing diversity issues in AI is too narrow and likely to privilege white women over others.

Some researchers are fighting for change, but there’s also a culture of resistance to their efforts. “Beneath this veneer of ‘oh, AI is the future, and we have all these sparkly, nice things’, both AI academia and AI industry are fundamentally conservative,” says Sabine Weber, a scientific consultant at VDI/VDE Innovation + Technik, a technology consultancy headquartered in Berlin. AI in both sectors is dominated by mostly middle-aged white men from affluent backgrounds. “They are really attached to the status quo,” says Weber, who is a core organizer of the advocacy group Queer in AI. Nature spoke to five researchers who are spearheading efforts to change the status quo and make the AI ecosystem more equitable.

Senior data science manager at Shopify in Atlanta, Georgia, and a general chair of the 2023 Deep Learning Indaba conference.

I am originally from Ghana and did my master’s in statistics at the University of Akron in Ohio in 2011. My background is in using machine learning to solve business problems in customer-experience management. I apply my analytics skills to build models that drive customer behaviour, such as customer-targeting recommendation systems, aspects of lead scoring (the ranking of potential customers, prioritizing which ones to contact for different communications) and things of that nature.
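For readers unfamiliar with lead scoring, the sketch below shows the general idea in Python: fit a classifier on past leads and rank new prospects by their predicted probability of converting. The features and data are invented for illustration and do not reflect Shopify's models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of lead scoring: train a classifier on past leads
# (features plus whether they converted), then rank new prospects by
# predicted conversion probability so the highest-scoring ones are
# contacted first. The features and data are invented for illustration.
rng = np.random.default_rng(0)
X_past = rng.normal(size=(500, 3))   # e.g. site visits, email opens, tenure
y_past = (X_past[:, 0] + X_past[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

X_new = rng.normal(size=(5, 3))              # incoming, unscored leads
scores = model.predict_proba(X_new)[:, 1]    # probability of conversion
ranked = np.argsort(scores)[::-1]            # contact top-scoring leads first
for lead_idx in ranked:
    print(f"lead {lead_idx}: score {scores[lead_idx]:.2f}")
```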

This year, I’m also a general chair for the Deep Learning Indaba, a meeting of the African machine-learning and AI community that is held in a different African country every year. Last year, it was held in Tunisia. This year, it is taking place in Ghana in September.

Our organization is built for all of Africa. Last year, 52 countries participated. The goal is to have all 54 African countries represented. Deep Learning Indaba empowers each country to have a network of people driving things locally. We have the flagship event, which is the annual conference, and country-specific IndabaX events (think TED and TEDx talks).

During Ghana’s IndabaX conferences, we train people in how to program and how to deal with different kinds of data. We also do workshops on what is happening in the industry outside of Ghana and how Ghana should be involved. IndabaX provides funding and recommends speakers who are established researchers working for companies such as DeepMind, Microsoft and Google.

To strengthen machine learning and AI and inclusion in Ghana, we need to build capacity by training young researchers and students to understand the skill sets and preparation they need to excel in this field. The number one challenge we face is resources. Our economic status is such that the focus of the government and most Ghanaians is on people’s daily bread. Most Ghanaians are not even thinking about technological transformation. Many local academics don’t have the expertise to teach the students, to really ground them in AI and machine learning.

Most of the algorithms and systems we use today were created by people outside Africa. Africa’s perspective is missing and, consequently, biases affect Africa. When we are doing image-related AI, there aren’t many African images available. African data points make up no more than 1% of most industry machine-learning data sets.

When it comes to self-driving cars, the US road network is nice and clean, but in Africa, the network is very bumpy, with a lot of holes. There’s no way that a self-driving car trained on US or UK roads could actually work in Africa. We also expect that using AI to help diagnose diseases will transform people’s lives. But this will not help Africa if people are not going there to collect data, and to understand African health care and related social-support systems, sicknesses and the environment people live in.

Today, African students in AI and machine learning must look for scholarships and leave their countries to study. I want to see this change and I hope to see Africans involved in decision-making, pioneering huge breakthroughs in machine learning and AI research.

Researchers outside Africa can support African AI by mentoring and collaborating with existing African efforts. For example, we have Ghana NLP, an initiative focused on building algorithms to translate English into more than three dozen Ghanaian languages. Global researchers volunteering to contribute their skill set to African-specific research will help with efforts like this. Deep Learning Indaba has a portal in which researchers can sign up to be mentors.

Maria Skoularidou has worked to improve accessibility at a major artificial-intelligence conference. Credit: Maria Skoularidou

PhD candidate in biostatistics at the University of Cambridge, UK, and founder and chair of {Dis}Ability in AI.

I founded {Dis}Ability in AI in 2018, because I realized that disabled people weren’t represented at conferences and it didn’t feel right. I wanted to start such a movement so that conferences could be inclusive and accessible, and disabled people such as me could attend them.

That year, at NeurIPS (the annual conference on Neural Information Processing Systems) in Montreal, Canada, at least 4,000 people attended and I couldn’t identify a single person who could be categorized as visibly disabled. Statistically, it doesn’t add up to not have any disabled participants.

I also observed many accessibility issues. For example, I saw posters that were inconsiderate with respect to colour blindness. The place was so crowded that people who use assistive devices such as wheelchairs, white canes or service dogs wouldn’t have had room to navigate the poster session. There were elevators, but for somebody with limited mobility, it would not have been easy to access all the session rooms, given the size of the venue. There were also no sign-language interpreters.

Since 2019, {Dis}Ability in AI has helped facilitate better accessibility at NeurIPS. There were interpreters, and closed captioning for people with hearing problems. There were volunteer escorts for people with impaired mobility or vision who requested help. There were hotline counsellors and silent rooms because large conferences can be overwhelming. The idea was: this is what we can provide now, but please reach out in case we are not considerate with respect to something, because we want to be ethical, fair, equal and honest. Disability is part of society, and it needs to be represented and included.

Many disabled researchers have shared their fears and concerns about the barriers they face in AI. Some have said that they wouldn’t feel safe sharing details about their chronic illness, because if they did so, they might not get promoted, be treated equally, have the same opportunities as their peers, be given the same salary and so on. Other AI researchers who reached out to me had been bullied and felt that if they spoke up about their condition again, they could even lose their jobs.

People from marginalized groups need to be part of all the steps of the AI process. When disabled people are not included, the algorithms are trained without taking our community into account. If a sighted person closes their eyes, that does not make them understand what a blind person must deal with. We need to be part of these efforts. Being kind is one way that non-disabled researchers can make the field more inclusive. Non-disabled people could invite disabled people to give talks or be visiting researchers or collaborators. They need to interact with our community at a fair and equal level.

William Agnew is a computer science PhD candidate at the University of Washington in Seattle. Sabine Weber is a scientific consultant at VDI/VDE Innovation + Technik in Erfurt, Germany. They are organizers of the advocacy organization Queer in AI.

Agnew: I helped to organize the first Queer in AI workshop for NeurIPS in 2018. Fundamentally, the AI field doesn’t take diversity and inclusion seriously. Every step of the way, efforts in these areas are underfunded and underappreciated. The field often protects harassers.

Most people doing the work in Queer in AI are graduate students, including me. You can ask, ‘Why isn’t it the senior professor? Why isn’t it the vice-president of whatever?’ The lack of senior members limits our operation and what we have the resources to advocate for.

The things we advocate for are happening from the bottom up. We are asking for gender-neutral toilets; putting pronouns on conference registration badges, speaker biographies and in surveys; opportunities to run our queer-AI experiences survey, to collect demographics, experiences of harm and exclusion, and the needs of the queer AI community; and we are opposing extractive data policies. We, as a bunch of queer people who are marginalized by their queerness and who are the most junior people in our field, must advocate from those positions.

In our surveys, queer people consistently name the lack of community, support and peer groups as their biggest issues that might prevent them from continuing a career path in AI. One of our programmes gives scholarships to help people apply to graduate school, to cover the fees for applications, standardized admissions tests, such as the Graduate Record Examination (GRE), and university transcripts. Some people must fly to a different country to take the GRE. It’s a huge barrier, especially for queer people, who are less likely to have financial support from their families and who experience repressive legal environments. For instance, US state legislatures are passing anti-trans and anti-queer laws affecting our membership.

In large part because of my work with Queer in AI, I switched from being a roboticist to being an ethicist. How queer people’s data are used, collected and misused is a big concern. Another concern is that machine learning is fundamentally about categorizing items and people and predicting outcomes on the basis of the past. These things are antithetical to the notion of queerness, where identity is fluid and often changes in important and big ways, and frequently throughout life. We push back and try to imagine machine-learning systems that don’t repress queerness.

You might say: ‘These models don’t represent queerness. We’ll just fix them.’ But queer people have long been the targets of different forms of surveillance aimed at outing, controlling or suppressing us, and a model that understands queer people well can also surveil them better. We should avoid building technologies that entrench these harms, and work towards technologies that empower queer communities.

Weber: Previously, I worked as an engineer at a technology company. I said to my boss that I was the only person who was not a cisgender dude in the whole team of 60 or so developers. He replied, ‘You were the only person who applied for your job who had the qualification. It’s so hard to find qualified people.’

But companies clearly aren’t looking very hard. To them it feels like: ‘We’re sitting on high. Everybody comes to us and offers themselves.’ Instead, companies could recruit people at queer organizations, at feminist organizations. Every university has a women in science, technology, engineering and mathematics (STEM) group or women in computing group that firms could easily go to.

But the thinking, ‘That’s how we have always done it; don’t rock the boat’, is prevalent. It’s frustrating. Actually, I really want to rock the boat, because the boat is stupid. It’s such a disappointment to run up against these barriers.

Laura Montoya encourages those who, like herself, came to the field of artificial intelligence through a non-conventional route. Credit: Tim McMacken Jr (tim@accel.ai)

Executive director of the Accel.AI Institute and LatinX in AI in San Francisco, California.

In 2016, I started the Accel.AI Institute as an education company that helps under-represented or underserved people in AI. Now, it’s a non-profit organization with the mission of driving AI for social impact initiatives. I also co-founded the LatinX in AI programme, a professional body for people of Latin American background in the field. I’m first generation in the United States, because my family emigrated from Colombia.

My background is in biology and physical science. I started my career as a software engineer, but conventional software engineering wasn’t rewarding for me. That’s when I found the world of machine learning, data science and AI. I investigated the best way to learn about AI and machine learning without going to graduate school. I’ve always been an alternative thinker.

I realized there was a need for alternative educational options for people like me, who don’t take the typical route, who identify as women, who identify as people of colour, who want to pursue an alternative path for working with these tools and technologies.

Later on, while attending large AI and machine-learning conferences, I met others like myself, but we made up a small part of the population. I got together with these few friends to brainstorm, ‘How can we change this?’ That’s how LatinX in AI was born. Since 2018, we’ve launched research workshops at major conferences, and hosted our own call for papers in conjunction with NeurIPS.

We also have a three-month mentorship programme to address the brain drain resulting from researchers leaving Latin America for North America, Europe and Asia. More senior members of our community and even allies who are not LatinX can serve as mentors.

In 2022, we launched our supercomputer programme, because computational power is severely lacking in much of Latin America. For our pilot programme, to provide research access to high-performance computing resources at the Guadalajara campus of the Monterrey Institute of Technology in Mexico, the technology company NVIDIA, based in Santa Clara, California, donated a DGX A100 system (essentially a large server computer). The government agency for innovation in the Mexican state of Jalisco will host the system. Local researchers and students can share access to this hardware for research in AI and deep learning. We put out a global call for proposals for teams that include at least 50% Latinx members who want to use this hardware, without having to be enrolled at the institute or even be located in the Guadalajara region.

So far, eight teams have been selected to take part in the first cohort, working on projects that include autonomous driving applications for Latin America and monitoring tools for animal conservation. Each team gets access to one graphics processing unit, or GPU (which is designed to handle complex graphics and visual-data processing tasks in parallel), for the period of time they request. This will be an opportunity for cross-collaboration, for researchers to come together to solve big problems and use the technology for good.
