Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study – PRNewswire

PISCATAWAY, N.J., Nov. 19, 2020 /PRNewswire/ -- IEEE, the world's largest technical professional organization dedicated to advancing technology for humanity, today released the results of a survey of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) in the U.S., U.K., China, India and Brazil regarding the most important technologies for 2021 overall, the impact of the COVID-19 pandemic on the speed of their technology adoption, and the industries expected to be most impacted by technology in the year ahead.

2021 Most Important Technologies and Challenges

Which will be the most important technologies in 2021? Among total respondents, nearly one-third (32%) say AI and machine learning, followed by 5G (20%) and IoT (14%).

Manufacturing (19%), healthcare (18%), financial services (15%) and education (13%) are the industries that the CIOs and CTOs surveyed believe will be most impacted by technology in 2021. At the same time, more than half (52%) of CIOs and CTOs see their biggest challenge in 2021 as dealing with aspects of COVID-19 recovery in relation to business operations. These challenges include a permanent hybrid remote and office work structure (22%), reopening and returning to offices and facilities (17%), and managing permanent remote working (13%). However, 11% said their biggest challenge will be the agility to stop and start IT initiatives as this unpredictable environment continues. Another 11% cited online security threats, including those related to remote workers, as the biggest challenge they see in 2021.

Technology Adoption, Acceleration and Disaster Preparedness due to COVID-19

CIOs and CTOs surveyed have sped up adopting some technologies due to the pandemic:

The adoption of IoT (42%), augmented and virtual reality (35%) and video conferencing (35%) technologies has been accelerated by the global pandemic.

Compared to a year ago, CIOs and CTOs overwhelmingly (92%) believe their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. What's more, of those who say they are better prepared, 58% strongly agree that COVID-19 accelerated their preparedness.

When asked which technologies will have the greatest impact on global COVID-19 recovery, one in four (25%) of those surveyed said AI and machine learning.

Cybersecurity

The top two concerns for CIOs and CTOs when it comes to the cybersecurity of their organization are security issues related to the mobile workforce, including employees bringing their own devices to work (37%), and ensuring the Internet of Things (IoT) is secure (35%). This is not surprising, since the number of connected devices such as smartphones, tablets, sensors, robots and drones is increasing dramatically.

Slightly more than one-third (34%) of CIO and CTO respondents said they can track and manage 26-50% of devices connected to their business, while 20% of those surveyed said they could track and manage 51-75% of connected devices.

About the Survey

"The IEEE 2020 Global Survey of CIOs and CTOs" surveyed 350 CIOs and CTOs in the U.S., China, U.K., India and Brazil from September 21 to October 9, 2020.

About IEEE

IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.

SOURCE IEEE

https://www.ieee.org

Joint Artificial Intelligence Center Has Substantially Grown To Aid The Warfighter – Department of Defense

It was just two years ago that the Joint Artificial Intelligence Center was created to harness the transformative potential of artificial intelligence technology for the benefit of America's national security, and it has grown substantially from humble beginnings.

Dana Deasy, the Defense Department's chief information officer, and Marine Corps Lt. Gen. Michael Groen, the director of the JAIC, discussed the growth and goals of the JAIC virtually from the Pentagon at a FedTalks event during National AI Week.

"One of the things we've wanted to keep in our DNA is this idea that we want to hire a lot of diversity of thought into [JAIC]," Deasy said, "but yet do that in a way where that diversity of thought coalesces around a couple of really important themes."

When JAIC began, it needed to take on some projects that could show people it could be nimble and agile, and that it had the talent to give something meaningful back to the Defense Department, he noted.

So JAIC started in a variety of different places, Deasy said. "But now as we've matured, we really need to focus on what was the core mission for JAIC. And that was, we have to figure out what the role is that AI plays in enabling the warfighter. And I've always said that JAIC should be central to any and all future discussions in that place," the CIO said.

"Transformation is our vision," Groen said.

"So, it's a big job. We discovered pretty quickly that seeding the environment with lots of small AI projects was not transformational in and of itself. We knew we had to do more. And so, what we're calling JAIC 2.0 is a focused transition in a couple of ways. [For example], we're going to continue to build AI products, because the talent in the JAIC is just superb," the JAIC director said.

Groen noted that the JAIC is thinking about solution spaces for a broad base of customers, which really gets it focused.

"There are, you know, the application and the utilization of AI across the department [that] is very uneven. We have places that are really good. And there, some of the services are just doing fantastic things. And we have some places, large-scale enterprises with fantastic use cases [that] really could use AI, but they don't know where to start. So, we're going to shift from a transformational perspective to start looking at that broad base of customers and enable them," he said.

JAIC is going to continue to work with the military services on the cutting edge of AI and AI application, especially in the integration space, where JAIC is bringing together intelligence and maneuver, Groen said. "The warfighting functions have superb stovepipes. But now we need to bring those stovepipes together and integrate them through AI," he added.

The history books of the future will say JAIC was about the joint common foundation, Deasy said. "JAIC could never do all of the AI initiatives within the Department of Defense, nor was it ever created to do that. But what we did say was that, for people who are going to roll up [their] sleeves and seriously start trying to leverage AI to help the warfighter every day, at the core of JAIC's success has got to be this joint common foundation," he noted.

Deasy noted that this foundation is powerful and very real.

Into next year, he added, JAIC will have some basic services. It is taking a minimum viable product approach, building some basic services, largely native services from cloud providers, and then adding further services on top of that.

"And where we hope to grow the technical platform is a place where people can bring their data, places where we can offer data services, data conditioning, maybe data labeling, and we can start curating data," Deasy projected. "One of the things we'd really like to be able to do for the department is start cataloging and storing algorithms and data. So now we'll have an environment so we can share training data, for example, across programs."
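As a rough illustration of the cataloging and data-sharing idea Deasy describes, here is a minimal sketch of a dataset registry. It is not the joint common foundation's actual design; every class, field and identifier below is hypothetical.

```python
# Hypothetical sketch only: the joint common foundation's actual services are not public.
# This illustrates the general idea described above, a catalog where programs register
# datasets so training data can be discovered and shared across programs.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DatasetRecord:
    name: str                     # e.g. "overhead-imagery-v1" (made-up identifier)
    owner_program: str            # program that published the data
    labeled: bool                 # whether the data is already labeled for training
    tags: List[str] = field(default_factory=list)


class DataCatalog:
    """Minimal in-memory registry standing in for a shared data-cataloging service."""

    def __init__(self) -> None:
        self._records: Dict[str, DatasetRecord] = {}

    def register(self, record: DatasetRecord) -> None:
        self._records[record.name] = record

    def find(self, tag: str) -> List[DatasetRecord]:
        # Lets one program discover training data another program has published.
        return [r for r in self._records.values() if tag in r.tags]


catalog = DataCatalog()
catalog.register(DatasetRecord("overhead-imagery-v1", "Program A", labeled=True,
                               tags=["imagery", "training"]))
print(catalog.find("training"))
```

In practice such a service would also need to track storage locations, access controls and the data provenance Groen discusses below; the sketch only shows the register-and-discover pattern.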

The modernized software foundation now gives JAIC a platform on which it can build AI, Groen said, adding that AI has to be a conscious application layer applied on top, leveraging the platform and the things that digital modernization provides.

"But when you think of it that way, holy cow, what a platform to operate from," he said.

So now JAIC will really have a place where the joint force can effectively operate, Groen said, adding that JAIC can start integrating intel and fires, intel and maneuver, command and control, the logistics enterprise, the combat logistics enterprise, and the broad support enterprise.

"You can't do any of that without a platform, and you can't do any of that without those digital modernization tenets," the JAIC director said.

If JAIC is going to have the whole force operating at the speed of machines, then it has to start bringing these artificial intelligence applications together into an ecosystem, Groen said, noting that it has to be a trusted ecosystem, meaning "we actually have to know, if we're going to bring data into a capability, we have to know that's good data."

"So how do we build an ecosystem so that we can know the provenance of data, and we can ensure that the algorithms are tested in a satisfactory way, so that we can comfortably and safely integrate data and decision-making across warfighting functions?" the JAIC director asked. "That's the kind of stuff that I think is really exciting, because that's the real transformation that we're after."

Artificial intelligence could be used to hack connected cars, drones warn security experts – ZDNet

Cyber criminals could exploit emerging technologies including artificial intelligence and machine learning to help conduct attacks against autonomous cars, drones and Internet of Things-connected vehicles, according to a report from the United Nations, Europol and cybersecurity company Trend Micro.

While AI and machine learning can bring "enormous benefits" to society, the same technologies can also bring a range of threats that can enhance current forms of crime or even lead to the evolution of new malicious activity.

"As AI applications start to make a major real-world impact, it's becoming clear that this will be a fundamental technology for our future," said Irakli Beridze, head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute. "However, just as the benefits to society of AI are very real, so is the threat of malicious use," he added.

In addition to super-powering phishing, malware and ransomware attacks, the paper warns that by abusing machine learning, cyber criminals could conduct attacks that could have an impact on the physical world.

For example, machine learning is being implemented in autonomous vehicles to allow them to recognise the environment around them and obstacles that must be avoided such as pedestrians.

However, these algorithms are still evolving and it's possible that attackers could exploit them for malicious purposes, to aid crime or just to create chaos. For example, AI systems that manage autonomous vehicles and regular vehicle traffic could be manipulated by attackers if they gain access to the networks that control them.

By causing traffic delays, perhaps even with the aid of stolen credit card details used to swamp a chosen area with hire cars, cyber attackers could provide other criminals with the extra time needed to carry out a robbery or other crime, while also getting away from the scene.

The report notes that as the number of automated vehicles on the roads increases, the potential attack surface also increases, so it's imperative that vulnerabilities and issues are considered sooner rather than later.

But it isn't just road vehicles that cyber criminals could target by exploiting new technologies and increased connectivity; there's the potential for attackers to abuse machine learning to impact airspace too.

Here, the paper suggests that autonomous drones could be of particular interest to cyber attackers, both criminal and nation-state-backed, because they have the potential to carry 'interesting' payloads like intellectual property.

Exploiting autonomous drones also provides cyber criminals with a potentially easy route to making money: hijacking delivery drones used by retailers and redirecting them to a new location, then taking the package and selling it on themselves.

Not only this, but there's the potential that a drone with a single board computer could also be exploited to collect Wi-Fi passwords or breach routers as it goes about its journeys, potentially allowing attackers access to networks and any sensitive data transferred using them.

And the report warns that these are just a handful of the potential issues that can arise from the use of new technology and the ways in which cyber criminals will attempt to exploit them.

"Cybercriminals have always been early adopters of the latest technology and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA breaking and voice cloning, and there are many more malicious innovations in the works," said Martin Roesler, head of forward-looking threat research at Trend Micro

One of the reasons the UN, Europol and Trend Micro have released the report is the hope that it will be seen by technology companies and manufacturers, and that they will become aware of the potential dangers they could face and work to solve problems before they become a major issue.

Facebook using artificial intelligence to forecast COVID-19 spread in every US county – 10News

SAN DIEGO (KGTV) -- State officials hope California's new 10 p.m. stay-at-home order will slow the spread of COVID-19; otherwise, another 10,000 San Diegans are projected to contract the virus in the next 10 days.

That's according to a new county-by-county forecast from Facebook, which rolled out the prediction software last month.

Facebook projects L.A. County will see the second-largest increase in cases in the country by November 30. San Diego County is projected to add the 15th most cases, reaching a total of 78,594 infections by Nov. 30.

The two-week forecast was released before Governor Gavin Newsom announced enhanced restrictions. Facebook will release a new two-week forecast next week.

"Many other forecasts around the world are only predicting caseload at a country or state level," said Laura McGorman with Facebook's Data for Good team. "We're trying to be much more local in this approach because we know so much of the COVID-19 response is in fact local."

McGorman said the forecast tool could help county and state officials plan hospital bed space, ICU capacity, ventilators and other critical supplies.

The map is powered by artificial intelligence that draws on seven kinds of data. It uses outside metrics like confirmed cases, doctor visits and the weather, combined with information Facebook collects, like a survey of people's symptoms and GPS location data.

The location data helps gauge whether people are staying home and isolating or circulating among the community, according to McGorman.
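As a rough, hypothetical sketch of how several county-level signals like these might feed a short-horizon forecast, the example below fits a simple regression. It is not Facebook's model; every feature name and number is invented purely for illustration.

```python
# Hypothetical illustration only: blends a few county-level signals of the kind described
# above (current cases, symptom-survey results, a mobility index) into a two-week forecast.
# Facebook's actual model, features and data are not shown here; all values are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per county: [current cases per 100k, % reporting symptoms, mobility index]
features = np.array([
    [250.0, 1.2, 0.65],
    [410.0, 2.1, 0.80],
    [120.0, 0.7, 0.55],
    [330.0, 1.8, 0.72],
])
# Cases per 100k observed two weeks later in those counties (invented numbers).
cases_two_weeks_later = np.array([310.0, 520.0, 140.0, 430.0])

model = LinearRegression().fit(features, cases_two_weeks_later)

# Forecast for a new county given its current signals.
print(model.predict(np.array([[290.0, 1.5, 0.70]])))
```

The point of the sketch is only that local signals, including mobility, enter the forecast per county rather than per state or country.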

Facebook only pulls data from users who opt in, and no, they're not reading your posts. All of the information, which is aggregated to the county or state level to protect privacy, is available for the public to download.

The tech giant initially started Data for Good three years ago to help with disaster-relief projects, McGorman said. Among other applications, their location data can quickly predict if evacuations are working.

"Typically, groups like the Red Cross have to knock door-to-door to see if people are still home, or wait for people to show up at shelters to see if they've gotten out of harm's way," she said.

The software has also been used to detect network outages when a hurricane knocks out cellphone towers, she said.

Facebook's COVID-19 mapping tools have been used to inform policymakers in New York and Mexico, and the data has helped analyze the effectiveness of stay-at-home orders in California, McGorman said.

Agencies advised to approach AI from an open, collaborative mindset – Federal News Network

In Zach Goldfine's view, it was unconscionable that veterans were waiting approximately 100 days just to get word that help was coming to process their benefits claims, let alone the time it took to actually receive that help.

But that was the reality before the Department of Veterans Affairs launched its Content Classification Predictive Service Application Programming Interface last year. More than 1.5 million claims for disability compensation and benefits were submitted annually, and 65%-80% of them came via mail or fax. And 98.2% of attempts to automate the language in those claims were failing.

"A veteran can write whatever, however they think about the injury they suffer. So they might say, 'My ear hurts and there's a ringing noise constantly,' but VA doesn't give a benefit for 'my ear hurts and I have ringing constantly'; it gives a benefit, or gives a monthly payment, for hearing loss," said Goldfine, deputy chief technology officer for Benefits at VA. "So the problem was that veterans were facing an extra five-day delay in getting the decision on their benefits, because there was this backup of claims at that portion of the process where it required a person to make that translation."
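Goldfine is describing a text-classification problem: mapping a veteran's free-text description to the standardized condition a rater would assign. The VA's actual Content Classification Predictive Service is not shown here; the sketch below is a generic scikit-learn stand-in with made-up examples, purely to illustrate the kind of translation being automated.

```python
# Hypothetical sketch of the free-text-to-benefit-category mapping described above.
# This is not the VA's model; the examples, labels and pipeline are all illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up claim snippets paired with the standardized category a rater would assign.
claims = [
    "my ear hurts and there is a ringing noise constantly",
    "constant ringing in both ears since deployment",
    "my knee gives out when I climb stairs",
    "knee pain and swelling from my service injury",
]
categories = ["hearing loss", "hearing loss", "knee condition", "knee condition"]

# TF-IDF features plus logistic regression: a minimal stand-in for the production classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(claims, categories)

# A new free-text claim is mapped to the closest standardized category.
print(classifier.predict(["there is ringing in my ear all the time"]))
```

In production the categories would come from VA's benefit schedule and the model would be trained on far more history, but the shape of the task, free text in and a standardized category out, is the same.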

After deploying the API, Goldfine said, the number of successfully automated claims tripled almost overnight, and it saved VA millions of dollars by reducing the time needed to translate claims. This year, VA once again reduced wait times with a rapidly deployed chatbot to field questions about COVID-19, including which facilities were open.

This was one example of successful artificial intelligence at agencies highlighted during the Impact Summit Series: Artificial Intelligence, presented by the General Services Administration's Office of Technology Transformation Services on Thursday. Goldfine said the API case at VA illustrates how AI can make employees' jobs easier rather than render those employees obsolete, a common fear of organizations and managers wary of implementing AI in their offices. Talk to staff, and expect to hear different concerns from managers and from the lower-level workers doing these rote tasks.

"We all have parts of our job that we don't like, whether that be a million emails that we have to respond to or entering certain data elements when we need to take leave. There are always monotonous parts of our jobs no matter what our job is, and I would guess most folks wish we could automate those away when and where possible," he said. "It was manual data reentry in a scenario where many of them are veterans themselves. They'd rather be spending their time doing things you'd think they'd rather be spending their time doing, like talking with veterans on the phone, understanding what's going on, getting them the information they need."

Goldfine also stressed building a multidisciplinary team to implement AI, get buy-in and consider the human impacts. System designers, product managers, user researchers, data scientists, software engineers and even policy experts are all critical.

Alka Patel, head of AI Ethics Policy for DoD's Joint Artificial Intelligence Center, brings a background in engineering and law to her role. She said good engineering principles of design, development, deployment and use are combined with consideration of risk management and government-corporate compliance.

Once DoD adopted its AI ethics principles in February, after two years of work leading up to that point, Patel's responsibility was to take those higher-level words and definitions and actually make them tactical.

When it comes to ethical AI, her advice to agencies was to start now and use any existing AI strategy as a framework. And although it may be impossible to predict every scenario or ethical quandary that can arise, some things, like principles, will probably stay firm, while testing and evaluation processes are more susceptible to change as technology evolves.

Seeing ethics as an enabler of AI, rather than a hindrance, is the better mindset. In addition, simply stating in an award that contractors must comply with DoD AI ethics principles can help from a signaling standpoint, but Patel was skeptical that it would result in the desired objectives.

"I'm very sensitive to dictating what those requirements need to be from an agency perspective. I think that's a conversation that needs to happen mutually with our contractors, or at least have some insight," she said. "We need to be not so prescriptive, we need to be flexible, but still have the fidelity of the content and the criteria we are expecting from the contractors themselves."
