Archive for the ‘Artificial Intelligence’ Category

Defining what’s ethical in artificial intelligence needs input from Africans – The Conversation CA

Artificial intelligence (AI) was once the stuff of science fiction. But it's becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare.

But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper Gebru and another researcher, Joy Buolamwini, showed that facial recognition software was less accurate at identifying women and people of colour than white men. Biases in training data can have far-reaching and unintended effects.

There is already a substantial body of research about ethics in AI. This highlights the importance of principles to ensure technologies do not simply worsen biases or even introduce new social harms. As the UNESCO draft recommendation on the ethics of AI states:

We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.

In recent years, many frameworks and guidelines have been created that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction. But it's also critical to look beyond technical solutions when addressing issues of bias or inclusivity. Biases can enter at the level of who frames the objectives and balances the priorities.

In a recent paper, we argue that inclusivity and diversity also need to be at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.

Research and development of AI and machine learning technologies are growing in African countries. Programmes such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have so far been held in 27 different African countries, illustrate the interest and human investment in the fields.

The potential of AI and related technologies to promote opportunities for growth, development and democratisation in Africa is a key driver of this research.

Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide the research. This might not be a problem if the principles and values in those frameworks had universal application. But it's not clear that they do.

For instance, the European AI4People framework offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticised within the applied ethical field of bioethics. It is seen as failing to do justice to the communitarian values common across Africa, which focus less on the individual and more on the community, and may even require exceptions to such a principle to allow for effective interventions.

Challenges like these, or even an acknowledgement that such challenges could exist, are largely absent from the discussions and frameworks for ethical AI.

Just as training data can entrench existing inequalities and injustices, so can a failure to recognise that sets of values can vary across social, cultural and political contexts.

In addition, failing to take into account social, cultural and political contexts can mean that even a seemingly perfect ethical technical solution can be ineffective or misguided once implemented.

For machine learning to be effective at making useful predictions, any learning system needs access to training data. This involves samples of the data of interest: inputs in the form of multiple features or measurements, and outputs, which are the labels scientists want to predict. In most cases, both the features and the labels require human knowledge of the problem. But a failure to correctly account for the local context could result in underperforming systems.
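The features-and-labels setup described above can be sketched in a few lines. The data and the classifier here are invented for illustration only: a toy nearest-neighbour predictor, with hypothetical building measurements labelled by a human surveyor.

```python
# A minimal sketch of supervised learning: each sample pairs a feature
# vector (the inputs) with a label (the output we want to predict).

def nearest_neighbour_predict(train_features, train_labels, sample):
    """Predict a label by copying the closest training example (1-NN)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_features)),
               key=lambda i: distance(train_features[i], sample))
    return train_labels[best]

# Hypothetical training data: [roof_area_m2, wall_reflectance] per building,
# labelled by a human surveyor. Both the choice of features and the labels
# encode human knowledge of the problem.
features = [[40.0, 0.2], [45.0, 0.25], [300.0, 0.7], [280.0, 0.65]]
labels = ["residential", "residential", "commercial", "commercial"]

print(nearest_neighbour_predict(features, labels, [50.0, 0.3]))
# -> residential
```

If the training buildings come from one region, the learned boundary silently assumes that region's construction styles, which is exactly the local-context failure the article describes.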

For example, mobile phone call records have been used to estimate population sizes before and after disasters. However, vulnerable populations are less likely to have access to mobile devices. So, this kind of approach could yield results that aren't useful.

Similarly, computer vision technologies for identifying different kinds of structures in an area will likely underperform where different construction materials are used. In both of these cases, as we and other colleagues discuss in another recent paper, not accounting for regional differences may have profound effects on anything from the delivery of disaster aid, to the performance of autonomous systems.

AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.

Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions. It is equally important not to assume that values are universal. Those developing AI need to start including people of different backgrounds: not just in the technical aspects of designing data sets and the like but also in defining the values that can be called upon to frame and set objectives and priorities.


Artificial Intelligence, Machine Learning, and Biometric Security Technology will be Drivers of Digital Transformation in 2022 And Beyond: IEEE…

Published on November 25, 2021

Bengaluru: IEEE, the world's largest technical professional organization committed to advancing technology for humanity, today concluded its virtual roundtable focused on "The Next Big Thing in Technology", the top technologies that will have a massive impact in 2022 and beyond. With the ongoing COVID-19 pandemic, where digitization and technology have become increasingly powerful drivers of innovation, IEEE curated this roundtable to discuss how AI, ML, and advanced security mechanisms are helping industries drastically increase productivity, automate systems to achieve better accuracy, and help workforces outperform while minimizing tedious repetitive tasks. AI-driven learning systems are generating more opportunities for intertwining technology trends, which will only continue in 2022.

Speaking in the roundtable about the impact of technology in 2022, Sukanya Mandal, IEEE Member and Founder and Data Science Professional, explained: "AI and ML are creating strides for technological advancements and will be extremely vital for our future to increase output, bring specialization into job roles, and increase the importance of human skills such as problem-solving, quantitative skills, and creativity. I strongly believe the future will consist of people and machines working together to improve and adapt to a modern way of working. AI will also play a critical role in all aspects of e-commerce, from customer experiences and marketing to fulfillment and distribution."

Recently published research on artificial intelligence and the future of work, conducted by MIT Work of The Future, highlights that AI continues to push large-scale innovation, create more jobs, and advance labor processes, and holds immense potential to impact various sectors. Furthermore, a Gartner report predicts that half of data centers around the world will deploy advanced robotics with AI and ML capabilities by 2025, which is estimated to lead to 30% higher operating efficiencies.

"Industry 4.0 is all about interconnecting machines, processes, and systems for maximum process optimization. Along the same lines, Industry 5.0 will be focused on the interaction between humans and machines. It is all about recognizing human expertise and creatively interconnecting it with machine intelligence for process optimization. It is true to say that we are not far away from the 5th industrial revolution. Over this decade and the next, we will witness applications of IoT and smart systems adhering to the principles of the 5th industrial revolution across various sectors," she added.

The roundtable also focused on redefining the future of biometric security technology. AI- and machine-learning-based systems, in combination with technologies such as IoT, cloud computing, and data science, have successfully advanced biometrics. Biometric systems generate huge volumes of data that can be managed with machine learning techniques for better handling and space management. Deep learning can also play a vital role in analyzing data to build automated systems that achieve better accuracy. A report by the Carnegie Endowment for International Peace stated that 75 countries, representing 43 percent of a total of 176 countries, are actively leveraging AI capabilities for biometric purposes, including facial recognition systems, smart cities, and others.

Commenting on this, Sambit Bakshi, IEEE Senior Member, said: "During the pandemic, we all saw the increased use of technology in public places such as airports, train stations, etc., not only to monitor body temperatures but also to help maintain COVID protocols. Biometric technologies are rapidly becoming a part of the daily lives of people around the world."

Biometric authentication is likely to expand in the coming years. Multimodal authentication uses a combination of different biometric technologies to authenticate someone. Cues from different platforms, such as gait features or anthropometric signatures, can be integrated through cloud computing and IoT-based architectures to verify someone's identity. The future of biometric security lies in simplicity: improving modern techniques is the simplest way to offer a high level of protection.
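One common way to integrate cues from multiple biometric modalities is score-level fusion: each matcher produces a match score, and a weighted combination is compared against a threshold. The sketch below is illustrative only; the modality names, weights, and threshold are assumptions, not values from any real system.

```python
# A minimal sketch of score-level fusion for multimodal biometric
# authentication. Each matcher outputs a match score in [0, 1].

def fuse_scores(scores, weights):
    """Combine per-modality match scores into one weighted-average score."""
    total = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total

def authenticate(scores, weights, threshold=0.7):
    """Accept the identity claim if the fused score clears the threshold."""
    return fuse_scores(scores, weights) >= threshold

# Hypothetical match scores from a face matcher and a gait matcher.
scores = {"face": 0.9, "gait": 0.6}
weights = {"face": 0.7, "gait": 0.3}  # trust the face matcher more

print(authenticate(scores, weights))  # -> True
```

The design choice here is that a weak score in one modality (gait, 0.6) can be compensated by a strong score in another (face, 0.9), which is the practical appeal of multimodal systems.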


Artificial intelligence innovation among railway industry companies rebounded in the last quarter – Railway Technology

Research and innovation in artificial intelligence in the railway equipment supply, product and services sector has rebounded in the last quarter.

The most recent figures show that the number of AI patent applications in the industry stood at 12 in the three months ending September, down from 16 over the same period last year.

Figures for patent grants related to AI followed a similar pattern to filings, shrinking from 14 in the three months ending September last year to nine this year.

The figures are compiled by GlobalData, which tracks patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, it groups these patents into key thematic areas and links them to key companies across various industries.

AI is one of the key areas tracked by GlobalData. It has been identified as a key disruptive force facing companies in the coming years, and is one of the areas in which companies investing resources now are expected to reap rewards.

The figures also provide an insight into the largest innovators in the sector.

Uber Technologies Inc was the top artificial intelligence innovator in the railway equipment supply, product and services sector in the last quarter. The company, which has its headquarters in the United States, filed 27 AI-related patents in the three months ending September, unchanged from the same period last year.

It was followed by the United States-based United Parcel Service Inc with three AI patent applications, the United States-based Westinghouse Air Brake Technologies Corp (three applications), and the United States-based JetBlue Airways Corp (three applications).

By Michael Goodier



[Webinar] Balancing Compliance with AI Solutions – How Artificial Intelligence Can Drive the Future of Work by Enabling Fair, Efficient, and Auditable…

December 7th, 2021

2:00 PM - 3:00 PM EDT

*Eligible for HRCI and SHRM recertification credits

With the expansion of Talent Acquisition responsibilities and a complex landscape shaped by hiring recovery, talent redeployment, the great resignation, and DE&I initiatives, there has never been a greater need for intelligent augmentation and automation solutions for recruiters, managers, and sourcers. There is also growing awareness of problematic artificial intelligence solutions being used across the HR space, and of the perils of efficiency and effectiveness solutions that come at the cost of fairness and diversity goals. These concerns are compounded by increased inquiries from employees and candidates about the AI solutions used to determine or influence their careers, particularly what's inside the AI and how it is tested for bias. Join this one-hour webinar hosted by HiredScore CEO & Founder Athena Karp as she shares her insights.

Speakers

Athena Karp

CEO & Founder @HiredScore

Athena Karp is the founder and CEO of HiredScore, an artificial intelligence HR technology company that powers the global Fortune 500. HiredScore leverages the power of data science and machine learning to help companies reach diversity and inclusion goals, adapt for the future of work, provide talent mobility and opportunity, and drive HR efficiencies. HiredScore has won best-in-class industry recognition and honors for delivering business value, accelerating HR transformations, and leading innovation around bias mitigation and ethical AI.


The Future of Artificial Intelligence Autonomous Killing Machines: What You Need to Know About Military AI – SOFREP

Artificial intelligence, or AI, has created a lot of buzz, and rightfully so. Anyone remember Skynet? If so, drop a comment. Ok, back to our regular programming. Military AI is no different. From self-driving vehicles to drone swarms, military AI will be used to increase the speed of operations and combat effectiveness. Let's look at the future of military AI, including some ethical implications.

Military AI is a topic that's been around for a while. Those who know anything about it know that it has existed for years, just not talked about, for reasons you can imagine. And it has been evolving.

These days, military AI has been helping with complex tasks such as target analysis and surveillance in combat.

Another great use for military AI in the future is to have it work alongside combat warfighters. AI could be used for a tactical advantage because it would be able to predict an enemy's next move before it happens. However, a good question to ask ourselves is: can China's AI outperform ours? Based on recent hacks by China on U.S. infrastructure, this seems like a concern we should take very seriously.

Artificial intelligence (AI) is any machine or computer-generated intelligence intended to emulate the natural intelligence of humans. AI is generated by machines, but its avenues of application are limitless, and it's no surprise that the military has taken an interest in this technology.

AI can be used to identify targets on the battlefield. Instead of relying on human intelligence, drones will be able to scan the battlefield and identify targets on their own.

This will help to reduce the number of warfighters on the battlefield, which will in turn save thousands of lives.

The future of military AI is bright until it isn't. Let's be real, we've all seen the Terminator movies.

Military AI has the potential to increase combat effectiveness and reduce the workforce. It will be used to autonomously pilot vehicles, respond to threats in the air, and conduct reconnaissance and guide smart weapon systems. It will help with strategic planning and even provide assistance during ground combat.

Imagine, for a second, the AI version of the disgruntled E-4!

AI is not only beneficial to military operations; it will also help with those boring jobs in logistics and supply chains. It can be used to predict demand for supplies and to find the most efficient routes for transport.
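Route planning of the kind described above is classically done with shortest-path search. The sketch below uses Dijkstra's algorithm over an invented supply network; the depot names and transport costs are purely hypothetical.

```python
# A minimal sketch of finding the cheapest supply route with Dijkstra's
# shortest-path algorithm over a small, invented network.
import heapq

def shortest_route(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal."""
    queue = [(0, start, [start])]  # (cost so far, node, path so far)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical depot network with transport costs per leg.
network = {
    "depot": {"base_a": 4, "base_b": 1},
    "base_b": {"base_a": 2, "front": 5},
    "base_a": {"front": 1},
}
print(shortest_route(network, "depot", "front"))
# -> (4, ['depot', 'base_b', 'base_a', 'front'])
```

The cheapest route here is not the most direct one, which is exactly why automated planning beats eyeballing a map once networks get large.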

While there are many benefits of military AI, there are also potential risks. Some of these include military AI being hacked, weaponized, or misused in ways not intended by its creators. China or Putin's Russia, anyone?


The future of military AI is kind of fuzzy, which could be good or bad.

In the near future, AI will be a part of military operations. It will be used in the field for combat and reconnaissance. In fact, AI-powered drones have been used in both battlefields and disaster zones, from Afghanistan to the Fukushima Nuclear Plant.

AI will be used in a variety of ways. It will be a part of combat operations, reconnaissance, and training. For example, AI can be used to build a 3D map of a combat zone. This would allow military personnel to plan their operations based on this map.

Another example of how AI can be used is in the training of new recruits. The military could use AI to simulate possible combat scenarios and determine which recruits are most likely to succeed in them. In this way, the military could train recruits using AI before deploying them to a combat zone, and we think that's pretty cool.

Perhaps surprisingly, AI may also be used in negotiation scenarios. Military negotiators could use AI to predict and prepare for negotiation outcomes and then use that data to plan their next steps in negotiations, such as predicting what response an opponent might have.

These are just some of the examples of how AI will be used in military operations in the future.

It is no secret that current warfare needs to be reconsidered. What we've done in the past isn't working. Afghanistan, anyone? Bueller? With the emergence of new technologies, what we know about warfare needs to be reconsidered as well.

The U.S. Department of Defense has announced a major initiative to invest in artificial intelligence for a range of military operations from predicting the weather to detecting and tracking enemies.

It will have a huge impact on both the speed and combat effectiveness of operations, as well as the ethical implications of what we leave behind for future generations. Military AI is not an issue that will go away anytime soon. And as it becomes more prevalent, it will create a future that is quite different from what we know now.

AI will open up many possibilities for military operations in the future. For example, AI has the potential to take on tasks that are not human-safe. AI will be able to analyze data at a faster rate than humans, which will provide a tactical advantage.

If autonomous tanks are also developed, they could easily take over for soldiers on the ground in the same way that drones have taken over for pilots in the air.

AI can also be used to better coordinate drone swarms. The use of drones in the military has become more popular, and these robots can be used to take on many different tasks. For example, swarms of drones could be used to both attack and defend.

However, these advancements come with ethical implications. For example, autonomous weapons could potentially kill without human input. They could be used indiscriminately and quite possibly create more civilian casualties than conventional weapons.

So, what does this all mean? The future of military AI is unclear and may be full of ethical dilemmas. However, it seems like AI is here to stay and will continue to provide both benefits and hindrances.

The future of military AI is now. We are already seeing the effects of military AI in operations today. For example, Lockheed Martin's Aegis system can control multiple air defense systems simultaneously. This means the Aegis system can monitor more than 100 targets at one time.

However, AI will have a much more significant impact on the military in the near future. AI will have a profound effect on combat operations, logistics, and training. Combat operations will be faster and more precise because AI can handle complex tasks more quickly than humans. Logistics will be more efficient because AI systems will be able to better coordinate the transport of supplies. And training will be more effective because AI can provide personalized instruction to soldiers.

But it may not be too long before we see autonomous killing machines. Russian President Vladimir Putin has indicated an interest in developing robot fighting machines with artificial intelligence. Remember our previous Terminator comment? And other countries are developing autonomous lethal machines, too. Their names rhyme with Russia and China.

There are many ethical implications regarding the use of military AI. For example, there is the risk of AI taking control of military assets, like drones. If one AI-controlled drone gets hacked, it could cause mass destruction.

Another ethical issue is the use of autonomous weapons systems. Many people argue that these systems are immoral because they don't give soldiers the chance to defend themselves.

The use of AI in military operations will continue to grow in the coming years. It's important to keep in mind the ethical implications that come with this growth.

Technology always has a way of evolving and improving. That's one of its best features. But not all innovation is good.

This means that AI will be used to fight wars, which is a cause for concern.

In the past, humans have had to make difficult decisions in times of war. But with AI, that decision could be made without the input of a human moral compass.

That's why there's debate over whether or not there should be limits on what can be done with military AI. It usually comes down to two camps: Elon Musk's camp of "AI will destroy us," and the more optimistic Tony Robbins camp of "AI will save us from ourselves."

An increasing number of people believe that AI should be regulated (Elon is one, and I tend to agree with him) and that there should be a ban on autonomous weapons. These arguments center on the idea that without a human in the decision-making process, there is no accountability. In fact, rewind that: there's often no accountability within the current government. Afghanistan pullout, anyone?

In light of these controversies, what does the future hold?

The future of military AI is unclear, but it will be a major force in future wars. There are ethical implications that we need to think about and try to regulate now before Skynet takes over and makes slaves of us all.

