Archive for the ‘Artificial Intelligence’ Category

Google CEO: ‘Artificial intelligence needs to be regulated’ | TheHill – The Hill

Google CEO Sundar Pichai is calling for governments around the world to regulate artificial intelligence, saying the sensitive technology should not be used to "support mass surveillance or violate human rights."

However, Pichai, the top executive at Google as well as its parent company Alphabet, also argued that governments should not go too far as they work to rein in high-stakes technologies like facial recognition and self-driving vehicles.

His speech in Europe and companion op-ed come as Europe weighs new ethics rules for artificial intelligence and the White House urges a light-touch approach to regulating technology.

"There is no question in my mind that artificial intelligence needs to be regulated," Pichai wrote in theFinancial Times. "It is too important not to. The only question is how to approach it."

Since 2018 Google has touted its AI principles as a potential framework for government regulation. The guidelines urge tech companies to ensure artificial intelligence technologies incorporate privacy features, contribute to the greater social good and do not reflect "unfair" human biases.

Critics have pushed back on the tech industry's stated support for AI regulation, claiming the companies are trying to dictate the terms of regulation in their own favor.

"Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities," Pichai wrote.

Governments around the world have found themselves behind the curve as artificial intelligence advances at lightning speed, opening up new frontiers for potential regulation. Several cities in the U.S. have taken the lead by imposing all-out bans on facial recognition technology, which often misidentifies people of color at higher rates.

Pichai has thrown his support behind a temporary ban on facial recognition technology, which he says can be used for "nefarious" purposes.

"I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it, Pichai said at a conference in Brussels this week.It can be immediate, but maybe theres a waiting period before we really think about how its being used. ... Its up to governments to chart the course.

Microsoft has also released its own ideas around how to regulate facial recognition tech, and says it abides by a strict set of AI ethics standards.

In 2018, Pichai spent his speech in Davos, Switzerland, touting the enormous potential of artificial intelligence, presenting a rosier view of the technology before it experienced an intense backlash over the past several years.

Now, as Europe and the U.S. creep closer to instituting rules around many of the products that Google creates, Pichai is raising his voice around what he sees as the best approach to AI.

"Googles role starts with recognizing the need for a principled and regulated approach to applying AI, but it doesnt end there," Pichai wrote. "We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together."

Read the original post:
Google CEO: 'Artificial intelligence needs to be regulated' | TheHill - The Hill

How the Pentagon’s JAIC Picks Its Artificial Intelligence-Driven Projects – Nextgov

The Pentagon launched its Joint Artificial Intelligence Center in 2018 to strategically unify and accelerate AI applications across the nation's defense and military enterprise. Insiders at the center have now spent about nine months executing that defense-driven AI support.

At an ACT-IAC forum in Washington Wednesday, Rachael Martin, the JAIC's mission chief of Intelligent Business Automation Augmentation and Analytics, highlighted insiders' early approach to automation and innovation.

"Our mission is to transform the [Defense] business process through AI technologies, to improve efficiency and accuracy, but really to do all those things so that we can improve our overall warfighter support," Martin said.

Within her specific mission area, Martin and the team explore and develop automated applications that support a range of efforts across the Pentagon, such as business administration, human capital management, acquisitions, finance and budget training, and beyond. Because the enterprise is vast, the center is selective in determining the projects and programs best fit to be taken under its wing.

"For the JAIC, there are a couple of key principles that we want to go by, or that we're going to adhere to when we're looking at a project and whether we support it," Martin explained.

The first principle to be evaluated is mission impact. In this review, insiders pose questions like "who cares?" she said. They assess the user base that would most benefit from the project and what the ultimate outcome would be across Defense if the JAIC opted to support it. Next, according to Martin, officials review data readiness. In this light, insiders address factors like where the data to be used is stored and whether it's actually prepped for AI, or for more advanced analysis and modeling to run on top of it.

The third factor that's assessed is technology maturity. Martin said that, contrary to what many seem to think, the JAIC is not a research organization; instead, it seeks to apply already-existing solutions and accelerate their adoption across the department where those improvements are needed most. Insiders are therefore not at all interested in spending heaps of time researching new, emerging AI and automation applications. Instead, they aim to identify what already exists and is ready to be deployed at this moment.

"So that's a big one for us that we like to emphasize," Martin said.

The final assessment is whether the JAIC can identify Defense insiders who will actually use whatever they are set to build. When developing something new, Martin said insiders want those it'll eventually touch to weigh in on the development every step of the way.

"We're not in the business of coming up with good ideas and then creating something and trying to hoist it on somebody else," Martin said. "We really believe in a very user-centric approach."

Excerpt from:
How the Pentagon's JAIC Picks Its Artificial Intelligence-Driven Projects - Nextgov

The World Economic Forum Jumps On the Artificial Intelligence Bandwagon – Forbes


Last Friday, the World Economic Forum (WEF) sent out a press announcement about an artificial intelligence (AI) toolkit for corporate boards. The release pointed to a section of their web site titled Empowering AI Leadership. For some reason, at this writing, there is no obvious link to the toolkit, but the press team was quick to provide the link. It is well laid out in linked web pages, and some well-produced PDFs are available for download. For purposes of this article, I have only looked at the overview and the ethics section, so here are my initial impressions.

As would be expected from an organization focused on a select few in the world, the AI toolkit is high level. Boards of directors have broad but shallow oversight over companies, so there is no need to focus on details. Still, I wish a bit more accuracy had been involved.

The description of AI is very nice. There are many definitions and, as I've repeatedly pointed out, the meanings of AI and of machine learning (ML) continue to change and to mean different things to different people. The problem in the setup is one that many people miss about ML. In the introductory module, the WEF claims "The breakthrough came in recent years, when computer scientists adopted a practical way to build systems that can learn." They support that with a link to an article that gets it wrong. The breakthrough mentioned in the article, the level of accuracy in an ML system, is far more driven by a non-AI breakthrough than by a specific ML model.

When we studied AI in the 1980s, deep learning was known and models existed. What we couldn't do was run them. Hardware and operating systems didn't support the needed algorithms and the data volumes that were required to train them. Cloud computing is the real AI breakthrough. The ability to link multiple processors and computers into an efficient and larger virtual machine is what has powered the last decade's growth of AI.

I was also amused by the list of core AI techniques, where deep learning and neural networks are listed at the same level as the learning methods used to train them. I'm only amused, not upset, because boards don't need to know the difference to start, but it's important to introduce them to the terms. I did glance at the glossary, and it's a very nice set of high-level definitions of some of these terms, so interested board members can get some clarification.

On a quick tangent, their definition of bias is well done, as only a few short sentences reference both the real-world issue of bias and the danger of bias within an AI system.

Ethics are an important component (in theory) of the management of companies. The WEF points out at the beginning of that module that technology companies, professional associations, government agencies, NGOs and academic groups have already developed many AI codes of ethics and professional conduct. The statement reminds me of the saying that standards are so important that everyone wants one of their own. The module then goes on to discuss a few of the issues with the different standards.

Where I differ from the WEF should be no surprise. This section strongly minimized governmental regulation. It's all up to the brave and ethical company. As Zuckerberg's decision to let Facebook allow lies in political advertisements, as long as it makes the firm and himself wealthier, makes clear, governments must be more active in setting guidelines for technology companies, both at large and within the AI arena. Two years ago, I discussed how the FDA is looking at how to handle machine learning. Governments move slowly, but they move. It's clear that companies need to be more aware of the changing regulatory environment. Ethical companies should both be involved in helping governments set reasonable regulations, ones that protect consumers as well as companies, and be preparing systems, in advance, to match where they think a proper regulatory environment will evolve.

The WEF's Davos meetings are, regardless of my own personal cynicism about them, where government and business leaders meet to discuss critical economic issues. It's great to see the WEF taking a strong look at AI and then presenting what looks like a very good introductory toolkit for boards of directors, but the need for strong ethical positions means that more is needed. It will be interesting to see how their positioning advances over the next couple of years.

Go here to read the rest:
The World Economic Forum Jumps On the Artificial Intelligence Bandwagon - Forbes

7 new ways golf instruction is embracing artificial intelligence and innovative technology – Golf Digest

ORLANDO -- Though golf has a tendency to move slower than most industries, the technology innovations we've seen this week beg to differ. Artificial intelligence and robotics have been terms perhaps thrown around in the past, implemented by only the biggest companies, but now we're actually seeing the results of intense research and development. And that's especially true in the golf instruction realm, where lessons can have so much added value with the right set of data and smart products.

There were too many items to say this is a definitive list. But this is at least what caught our eye at the 2020 PGA Merchandise Show in the ever-expanding tech/instruction space.

1. Hack Motion wrist sensor. Teachers (and students) want the ability to capture swing data and analyze it immediately. The biofeedback from Hack Motion, a wrist-motion training system, is synced to your tablet or smartphone (or computer). It tracks your wrist movement in real time, measuring wrist flexion, extension and rotation via a device you wear like a watch, in addition to a Velcroed strap you wear around your forefingers. After a quick, one-minute calibration, the system captures any swing you make and delivers the data to the app, where you can study it. There's also access to tour-player data the company has captured over the past two-plus years. The Latvian-based company's CEO, Atis Hermanas, says it has sold units to more than 40 countries over the past two years. It had leading coaches David Orr, James Leitz, Brian Manzella and others speaking to the benefits of the technology at the PGA Show. With audio feedback and seven hours of battery life, there's a lot to like about Hack Motion.

2. TrackMan's new A.I. technology (Tracy). TrackMan's continued iterations on its existing technology will be fun to follow. It will continue to innovate on its launch-monitor technology to expand its offerings, including its simulator business. Perhaps most impressive is its new Tracy technology, which TrackMan unveiled at the PGA Show, with a soft June 1 launch. Tracy, adapted from "tracer," is a mode you can turn on and off, which will recommend what you should work on based on a minimum of six shots on a TrackMan device. It will audibly communicate with you (if you want it to), with voice commands that ask game-analysis questions, and it will make recommendations based on the (estimated) 500-million-plus shots the company has collected around the world. As the company says, it pinpoints what you need to work on, not how to work on it, encouraging you to seek the advice of an instructor.

3. V1 + BodiTrak partnership. Applying ground force in the golf swing with proper sequencing is one of the hot instruction applications these days, as we continue to study how tour pros apply such force to their tee shots. A partnership between BodiTrak, which measures vertical force and velocity through a portable pressure mat, and V1 Sports, one of the first to penetrate the instruction/app space, hopes to deliver a complete way to capture data and study the kinematic sequencing. The package goes for $3,500.


4. K-Motion's Smart Tiles. You've likely seen or heard about K-Motion's K-Vest motion-capture technology, which allows 3-D swing data to be captured and analyzed. With the system's new Smart Tiles, debuting at the PGA Show, a player's wrist and body movements are immediately captured and stored in the system's cloud-based improvement panel, allowing your teacher to provide feedback immediately. Color-coded cues also make it easy to understand which area of your swing needs the most help, along with auditory feedback to specify the positions you need to get into. (K-Player, the individual consumer version of the technology, goes for $2,495. And K-Coach, which includes the Smart Tiles, is $5,495.)


5. Dragonfly. A new player in this space, Guided Knowledge, a British-based company, has introduced a smart suit with 18 sensors that a golfer can wear underneath their golf clothes and take onto the course. 3-D data is captured in real time as you play your round, and it's viewable via a remote coaching app with hundreds of performance metrics. Instead of being tied to the range or a pro's teaching facility, you can play your round and have sensors, from head to toe, measure your movements for improvement. "Players no longer have to be in a lab or a teacher's facility," says Jon Dalzell, chief science officer of Guided Knowledge. "What used to be an appointment is now available anywhere, anytime."


6. Uneekor Eye XO launch monitor. The incredible explosion of simulators and launch monitors within golf has been fun to follow. The new Eye XO from Uneekor is a little different. It's an overhead launch monitor with non-marking ball technology, which will work outside, a convenience for some. With two cameras, down the line and face-on, capturing 200 frames per second, in addition to a stereographic lens overhead, the system is able to provide a very crisp, sharp video of the golfer making contact with the ball, with video from each angle showing the ball traveling off the face, without any pixelation. Doug Bybee, with more than 25 years of industry experience as a fitter with Mizuno and Cobra, among others, says he whiteboarded the concept for the company's new product just two days before his team in South Korea developed a software and cloud solution. He unveiled the prototype at the PGA Show, with the product marked for a June 1 launch ($10,000 to $12,000 for the complete program, or $1,250 for the pair of straight cameras).


7. U.S. Kids teaching app. One of the leaders in youth golf, U.S. Kids unveiled a new app this week that will allow instructors to organize and track kids' progress to deliver feedback to the player and coach. The digital Player Pathway scoreboard includes color-coded levels for each player, too, for easier sorting and tracking. Just like the other items above, it's making life easier on the teacher, so they can be more efficient with their time, and help more players ... a win for all.



See the original post:
7 new ways golf instruction is embracing artificial intelligence and innovative technology - Golf Digest

The Ethical Upside to Artificial Intelligence – War on the Rocks

According to some, artificial intelligence (AI) is the new electricity. Like electricity, AI will transform every major industry and open new opportunities that were never possible. However, unlike electricity, the ethics surrounding the development and use of AI remain controversial, which is a significant element constraining AI's full potential.

The Defense Innovation Board (DIB) released a paper in October 2019 that recommends the ethical use of AI within the Defense Department. It described five principles of ethically used AI: responsible, equitable, traceable, reliable, and governable. The paper also identifies measures the Joint Artificial Intelligence Center, Defense Advanced Research Projects Agency (DARPA), and U.S. military branches are taking to study the ethical, moral, and legal implications of employing AI. While the paper primarily focused on the ethics surrounding the implementation and use of AI, it also argued that AI must have the ability to detect and avoid unintended harm. This article seeks to expand on that idea by exploring AI's ability to operate within the Defense Department using an ethical framework.

Designing an ethical framework (a set of principles that guide ethical choice) for AI, while difficult, offers a significant upside for the U.S. military. It can strengthen the military's shared moral system, enhance ethical considerations, and increase the speed of decision-making in a manner that provides decision superiority over adversaries.

AI Is Limited without an Ethical Framework

Technology is increasing the complexity and speed of war. AI, the use of computers to perform tasks normally requiring human intelligence, can be a means of speeding decision-making. Yet, due to a fear of machines' inability to consider ethics in decisions, organizations are limiting AI's scope to focus on data-supported decision-making: using AI to summarize data while keeping human judgment as the central processor. For example, leaders within the automotive industry received backlash for programming self-driving cars to make ethical judgments. Some professional driving organizations have demanded that these cars be banned from the roads for at least 50 years.

This backlash, while understandable, misses the substantial upside that AI can offer to ethical decision-making. AI reflects human input and operates on human-designed algorithms that set parameters for the collection and correlation of data to facilitate machine learning. As a result, it is possible to build an ethical framework that reflects a decision-maker's values. Of course, when the data that humans supply is biased, for example, AI can mimic its trainers by discriminating on gender and race. Biased algorithms, to be sure, are a drawback. However, bias can be mitigated by techniques such as counterfactual fairness, Google AI's recommended practices, and algorithms such as those provided by IBM's AI Fairness 360 toolkit. Moreover, AI's processing power makes it essential for successfully navigating ethical dilemmas in a military setting, where complexity and time pressure often obscure underlying ethical tensions.
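To make the bias point concrete, here is a minimal sketch, in plain Python rather than any particular toolkit, of the kind of parity check and reweighing step such techniques perform. The metric, the reweighing scheme, and the toy data are illustrative assumptions, not the actual API of AI Fairness 360 or Google AI's recommended practices.

```python
import numpy as np

def statistical_parity_difference(labels, group):
    """Difference in favorable-outcome rates between two groups.

    labels: 1 = favorable decision, 0 = unfavorable
    group:  1 = privileged group, 0 = unprivileged group
    Values near 0 suggest parity; a large gap flags potential bias.
    """
    rate_priv = labels[group == 1].mean()
    rate_unpriv = labels[group == 0].mean()
    return rate_unpriv - rate_priv

def reweigh(labels, group):
    """Simple reweighing: weight each (group, label) cell by
    expected/observed frequency so a downstream model sees a more
    balanced training signal."""
    weights = np.ones_like(labels, dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (labels == y)
            expected = (group == g).mean() * (labels == y).mean()
            observed = mask.mean()
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# Illustrative data: decisions and group membership for 8 records.
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(statistical_parity_difference(labels, group))  # negative: unprivileged group favored less often
print(reweigh(labels, group))
```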

A significant obstacle to building an ethical framework for AI is a fundamental element of war: the trade-off between human lives and other military objectives. While international humanitarian law provides a codification of actions, many of which have ethical implications, it does not answer all questions related to combat. It primarily focuses on defining combatants, the treatment of combatants and non-combatants, and acceptable weapons. International humanitarian law does not deal with questions concerning how many civilian deaths are acceptable for killing a high-valued target, or how many friendly lives are worth sacrificing to take control of a piece of territory. While, under international law, these are examples of military judgments, this remains an ethical decision for the military leader responsible.

Building ethical frameworks into AI will help the military comply with international humanitarian law and leverage new opportunities while predicting and preventing costly mistakes in four ways.

Four Military Benefits of an Ethical AI Framework

Designing an ethical framework for AI will benefit the military by forcing its leaders to reexamine existing ethical frameworks. In order to supply the benchmark data on which AI can learn, leaders will need to define, label, and score choice options in ethical dilemmas. In doing so, they will have three primary theoretical frameworks to leverage for guidance: consequentialist, deontological, and virtue. While consequentialist ethical theories focus on the consequences of the decision (e.g., expected lives saved), deontological ethical theories are concerned with compliance with a system of rules (refusing to lie based on personal beliefs and values despite the possible outcomes). Virtue ethical theories are concerned with instilling the right amount of a virtuous quality into a person (too little courage is cowardice; too much is rashness; the right amount is courage). A common issue cited as an obstacle to machine ethics is the lack of agreement on which theory or combination of theories to follow; leaders will have to overcome this obstacle. This introspection will help them better understand their ethical framework, clarify and strengthen the military's shared moral system, and enhance human agency.
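As a purely hypothetical illustration of what "define, label, and score choice options" could look like as benchmark data, the sketch below blends scores from the three framework families with leader-chosen weights. The option names, scores, and weights are invented for illustration; nothing here reflects actual doctrine or the DIB's principles.

```python
from dataclasses import dataclass

@dataclass
class ChoiceOption:
    name: str
    consequentialist: float  # e.g., normalized expected lives saved
    deontological: float     # e.g., degree of compliance with rules of engagement
    virtue: float            # e.g., alignment with professed service values

def composite_score(option: ChoiceOption, weights=(0.4, 0.4, 0.2)) -> float:
    """Blend the three framework scores using leader-chosen weights."""
    w_c, w_d, w_v = weights
    return (w_c * option.consequentialist
            + w_d * option.deontological
            + w_v * option.virtue)

# Two hypothetical labeled options a leader might score for training data.
options = [
    ChoiceOption("strike now", consequentialist=0.7, deontological=0.5, virtue=0.6),
    ChoiceOption("delay and observe", consequentialist=0.5, deontological=0.9, virtue=0.8),
]
best = max(options, key=composite_score)
print(best.name, round(composite_score(best), 2))  # "delay and observe" 0.72
```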

Second, AI can recommend decisions that consistently reflect a leader's preferred ethical decision-making process. Even in high-stakes situations, human decision-making is prone to influence from factors that have little or nothing to do with the underlying choice. Things like poor nutrition, fatigue, and stress, all common in warfare, can lead to biased and inconsistent decision-making. Other influences, such as acting in one's self-interest or extreme emotional responses, can also contribute to military members making unethical decisions. AI, of course, does not become fatigued or emotional. The consistency of AI allows it to act as a moral adviser by providing morally relevant data that leaders can rely on as their judgment becomes impaired. Overall, this can increase the confidence of young decision-makers, a concern the commander of U.S. Army Training and Doctrine Command brought up early last year.

Third, AI can help ensure that U.S. military leaders make the right ethical choice, however they define that, in high-pressure situations. Overwhelming the adversary is central to modern warfare. Simultaneous attacks and deception operations aim to confuse decision-makers to the point where they can no longer use good judgment. AI can process and correlate massive amounts of data to provide not only response options, but also probabilities that a given option will result in an ethically acceptable outcome. Collecting battlefield data, processing the information, and making an ethical decision is very difficult for humans in a wartime environment. Although the task would still be extremely difficult, AI can gather and process information more efficiently than humans. This would be valuable for the military. For example, AI that is receiving and correlating information from sensors across the entire operating area could estimate non-combatant casualties, the proportionality of an attack, or social reactions from observing populations.
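A rough sketch of the kind of correlation described here: hypothetical sensor feeds are fused into a single estimate of civilian presence and checked against a placeholder proportionality threshold. The fusion rule, the numbers, and the threshold are assumptions made for illustration, not anything drawn from doctrine or a fielded system.

```python
import math

def fused_probability(sensor_probs, weights=None):
    """Combine independent sensor estimates of civilian presence
    (each a probability strictly between 0 and 1) into one fused
    estimate via weighted log-odds averaging."""
    if weights is None:
        weights = [1.0] * len(sensor_probs)
    total = sum(weights)
    log_odds = sum(w * math.log(p / (1 - p))
                   for p, w in zip(sensor_probs, weights)) / total
    return 1 / (1 + math.exp(-log_odds))

def proportionality_check(expected_civilian_harm, military_value, max_ratio=0.1):
    """Flag an option when estimated civilian harm is disproportionate
    to the assessed military value (the ratio is a placeholder)."""
    return expected_civilian_harm <= max_ratio * military_value

# Three hypothetical sensor feeds reporting civilian presence near a target.
presence = fused_probability([0.10, 0.30, 0.05])
print(round(presence, 3), proportionality_check(presence * 20, military_value=50))
```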

Finally, AI can also extend the time allowed to make ethical decisions in warfare. For example, a central concern in modern military fire support is the ability to outrange the opponent, to be able to shoot without being shot. The race to extend the range of weapons to outpace adversaries continues to increase the time between launch and impact. Future warfare will see weapons that are launched and enter an area that is so heavily degraded and contested that the weapon will lose external communication with the decision-maker who chose to fire it. Nevertheless, as the weapon moves closer to the target, it could gain situational awareness on the target area and identify changes pertinent to the ethics of striking a target. If equipped with onboard AI operating with an ethical framework, the weapon could continuously collect, correlate, and assess the situation throughout its flight to meet the parameters of its programmed framework. If the weapon identified a change in civilian presence or other information altering the legitimacy of a target, the weapon could divert to a secondary target, locate a safe area to self-detonate, or deactivate its fuse. This concept could apply to any semi- or fully autonomous air, ground, maritime, or space assets. The U.S. military could not afford a weapon system deactivating or returning to base in future conflicts each time it loses communication with a human. If an AI-enabled weapon loses the ability to receive human input, for whatever reason, an ethical framework will allow the mission to continue in a manner that aligns the weapons actions with the intent of the operator.
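The in-flight logic described above might be sketched, at a very high level, as the fallback chain below. The inputs, threshold, and action set are assumptions made for illustration only; they do not describe any fielded weapon's behavior.

```python
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    DIVERT_TO_SECONDARY = auto()
    SELF_DETONATE_SAFE_AREA = auto()
    DEACTIVATE = auto()

def in_flight_decision(civilian_presence: float, target_still_valid: bool,
                       secondary_available: bool, safe_area_available: bool,
                       civilian_threshold: float = 0.05) -> Action:
    """One pass of the framework: re-assess the target each update cycle
    and fall back to progressively less destructive actions when the
    programmed ethical parameters are no longer met."""
    if target_still_valid and civilian_presence < civilian_threshold:
        return Action.CONTINUE
    if secondary_available:
        return Action.DIVERT_TO_SECONDARY
    if safe_area_available:
        return Action.SELF_DETONATE_SAFE_AREA
    return Action.DEACTIVATE

# Example: civilians detected near the primary target after launch.
print(in_flight_decision(civilian_presence=0.4, target_still_valid=True,
                         secondary_available=True, safe_area_available=True))
```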

Conclusion

Building an ethical framework for AI will help clarify and strengthen the military's shared moral system. It will allow AI to act as a moral adviser and provide feedback as the judgment of decision-makers becomes impaired. Similarly, an ethical framework for AI will maximize the utility of its processing power to help ensure ethical decisions when human cognition is overwhelmed. Lastly, providing AI an ethical framework can extend the time available to make ethical decisions. Of course, AI is only as good as the data it is provided.

AI should not replace U.S. military leaders as ethical decision-makers. Instead, if correctly designed, AI should clarify and amplify the ethical frameworks that U.S. military leaders already bring to war. It should help leaders grapple with their own moral frameworks, and help bring those frameworks to bear by processing more data than any decision-maker could, in places where no decision-maker could go.

AI may create new programming challenges for the military, but not new ethical challenges. Grappling with the ethical implications of AI will help leaders better understand moral tradeoffs inherent in combat. This will unleash the full potential of AI, and allow it to increase the speed of U.S. decision-making to a rate that outpaces its adversaries.

Ray Reeves is a captain in the U.S. Air Force and a tactical air control party officer and joint terminal attack controller (JTAC) instructor and evaluator at the 13th Air Support Operations Squadron at Fort Carson, Colorado. He has multiple combat deployments and is a doctoral student at Indiana Wesleyan University, where he studies organizational leadership. The views expressed here are his alone and do not necessarily reflect those of the U.S. government or any part thereof. LinkedIn.


View original post here:
The Ethical Upside to Artificial Intelligence - War on the Rocks