Archive for February, 2020

How to Train Your AI Soldier Robots (and the Humans Who Command Them) – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part a.), which asks how institutions, organizational structures, and infrastructure will affect AI development, and whether artificial intelligence will require the development of new institutions or changes to existing ones.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (2001: A Space Odyssey), reason with it (WarGames), blow it up (Star Wars: The Phantom Menace), or be defeated by it (Dr. Strangelove). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in Star Wars).

These science fiction tropes are legitimate models for military discussion and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really artificial if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.

As the capabilities of AI-enabled robots increase, and in particular as behaviors emerge that are both complex and outside past human experience, how will we organize, train, and command them and the humans who will supervise and maintain them? Existing methods and structures, such as military ranks and doctrine, that have evolved over millennia to manage the complexity of human behavior will likely be necessary. But because robots will evolve new behaviors we cannot yet imagine, they are unlikely to be sufficient. Instead, the military and its partners will need to learn new types of organization and new approaches to training. It is impossible to predict what these will be but very possible they will differ greatly from approaches that have worked in the past. Ongoing experimentation will be essential.

How to Respond to AI Advances

The development of AI, especially machine learning, will lead to unpredictable new types of robots. Advances in AI suggest that humans will have the ability to create many types of robots, of different shapes, sizes, or degrees of independence or autonomy. It is conceivable that humans may one day be able to design tiny AI bullets to pierce only designated targets, automated aircraft to fly as loyal wingmen alongside human pilots, or thousands of AI fish to swim up an enemy's river. Or we could design AI not as a device but as a global grid that analyzes vast amounts of diverse data. Multiple programs funded by the Department of Defense are on their way to developing robots with varying degrees of autonomy.

In science fiction, robots are often depicted as behaving in groups (like the robot dogs in Metalhead). Researchers inspired by animal behaviors have developed AI concepts such as swarms, in which relatively simple rules for each robot can result in complex emergent phenomena on a larger scale. This is a legitimate and important area of investigation. Nevertheless, simply imitating the known behaviors of animals has its limits. After observing the genocidal nature of military operations among ants, biologists Bert Hölldobler and E. O. Wilson wrote, "If ants had nuclear weapons, they would probably end the world in a week." Nor would we want to limit AI to imitating human behavior. In any case, a major point of machine learning is the possibility of uncovering new behaviors or strategies. Some of these will be very different from all past experience: human, animal, and automated. We will likely encounter behaviors that, although not human, are so complex that some human language, such as "personality," may seem appropriately descriptive. Robots with new, sophisticated patterns of behavior may require new forms of organization.
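The idea that simple local rules can yield coordinated group behavior is easy to sketch. The toy function below is a hypothetical illustration in the spirit of classic flocking models, not any fielded system; the rule weights and the sensing radius are arbitrary choices made for the example. Each agent sees only nearby agents, yet the group as a whole draws together while keeping its spacing.

```python
import math

def swarm_step(agents, radius=5.0, cohesion=0.05, separation=0.15):
    """One tick of a minimal 2-D swarm. Each agent applies two purely local
    rules -- drift toward nearby agents, push away from agents that are too
    close -- with no leader and no global plan."""
    updated = []
    for i, (x, y) in enumerate(agents):
        vx = vy = 0.0
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < radius:                       # only local information
                vx += (ox - x) * cohesion        # rule 1: stay with the group
                vy += (oy - y) * cohesion
                if d < 1.0:
                    vx -= (ox - x) * separation  # rule 2: avoid collisions
                    vy -= (oy - y) * separation
        updated.append((x + vx, y + vy))
    return updated
```

Iterating `swarm_step` draws scattered agents into a cohesive cluster; nothing in either rule mentions the cluster, which is the emergent part.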

Military structure and scheme of maneuver are key to victory. Groups often fight best when they don't simply swarm but execute sophisticated maneuvers in hierarchical structures. Modern military tactics were honed over centuries of experimentation and testing. This was a lengthy, expensive, and bloody process.

The development of appropriate organizations and tactics for AI systems will also likely be expensive, although one can hope that through the use of simulation it will not be bloody. But it may happen quickly. The competitive international environment creates pressure to use machine learning to develop AI organizational structure and tactics, techniques, and procedures as fast as possible.

Despite our considerable experience organizing humans, when dealing with robots with new, unfamiliar, and likely rapidly evolving personalities we confront something of a blank slate. We must think beyond established paradigms, beyond the computer as all-powerful or the computer as loyal sidekick.

Humans fight in a hierarchy of groups, each soldier in a squad or each battalion in a brigade exercising a combination of obedience and autonomy. Decisions are constantly made at all levels of the organization. Deciding what decisions can be made at what levels is itself an important decision. In an effective organization, decision-makers at all levels have a good idea of how others will act, even when direct communication is not possible.

Imagine an operation in which several hundred underwater robots are swimming up a river to accomplish a mission. They are spotted and attacked. A decision must be made: Should they retreat? Who decides? Communications will likely be imperfect. Some mid-level commander, likely one of the robot swimmers, will decide based on limited information. The decision will likely be difficult and depend on the intelligence, experience, and judgment of the robot commander. It is essential that the swimmers know who or what is issuing legitimate orders. That is, there will have to be some structure, some hierarchy.

The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.

Training Robot Warriors

Robots with AI-enabled technologies will have to be exercised regularly, partly to test them and understand their capabilities and partly to provide them with the opportunity to learn from recreating combat. This doesn't mean that each individual hardware item has to be trained, but that the software has to develop by learning from its mistakes in virtual testbeds and, to the extent that they are feasible, realistic field tests. People learn best from the most realistic training possible. There is no reason to expect machines to be any different in that regard. Furthermore, as capabilities, threats, and missions evolve, robots will need to be continuously trained and tested to maintain effectiveness.

Training may seem a strange word for machine learning in a simulated operational environment. But then, conventional training is human learning in a controlled environment. Robots, like humans, will need to learn what to expect from their comrades. And as they train and learn highly complex patterns, it may make sense to think of such patterns as personalities and memories. At least, the patterns may appear that way to the humans interacting with them. The point of such anthropomorphic language is not that the machines have become human, but that their complexity is such that it is helpful to think in these terms.

One big difference between people and machines is that, in theory at least, the products of machine learning, the code for these memories or personalities, can be uploaded directly from one very experienced robot to any number of others. If all robots are given identical training and the same coded memories, we might end up with a uniformity among a units members that, in the aggregate, is less than optimal for the unit as a whole.
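In software terms, that transfer is just copying learned parameters. The sketch below uses an invented `RobotPolicy` class with a toy two-number "personality"; a real system would serialize trained network weights, but the one-to-many cloning step works the same way:

```python
import copy

class RobotPolicy:
    """Toy stand-in for a learned control policy: its 'personality' is just
    the parameter values accumulated during training."""
    def __init__(self):
        self.weights = {"advance": 0.0, "retreat": 0.0}

    def learn(self, experience):
        # Placeholder update rule; a real system would run gradient descent.
        for key, delta in experience.items():
            self.weights[key] += delta

# One robot accumulates experience...
veteran = RobotPolicy()
veteran.learn({"advance": 0.8, "retreat": 0.2})

# ...and its learned state is cloned onto an entire fleet in one step,
# something no amount of human training can replicate.
fleet = [RobotPolicy() for _ in range(100)]
for robot in fleet:
    robot.weights = copy.deepcopy(veteran.weights)
```

The deep copy matters: each fleet member gets its own parameter set, so subsequent learning by one robot does not silently alter the others.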

Diversity of perspective is accepted as a valuable aid to human teamwork. Groupthink is widely understood to be a threat. It's reasonable to assume that diversity will also be beneficial to teams of robots. It may be desirable to create a library of many different personalities or memories that could be assigned to different robots for particular missions. Different personalities could be deliberately created by using somewhat different sets of training testbeds to develop software for the same mission.
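One cheap way to manufacture that diversity, sketched here with a hypothetical `train_personality` helper, is to run the identical training procedure against differently seeded simulated testbeds, so each copy of the software converges on slightly different behavior:

```python
import random

def train_personality(seed, episodes=1000):
    """Toy illustration: the same training procedure run against a different
    simulated testbed (here, just a different RNG seed) yields a different
    learned 'personality' (here, a single aggression parameter)."""
    rng = random.Random(seed)
    aggression = 0.5
    for _ in range(episodes):
        outcome = rng.uniform(-1, 1)     # simulated engagement result
        aggression += 0.01 * outcome     # nudge behavior toward what worked
    return round(aggression, 3)

# A "library" of distinct personalities trained for the same mission,
# from which planners could draw a deliberately varied mix.
library = {seed: train_personality(seed) for seed in range(5)}
```

Each entry is reproducible from its seed, so a given personality can be audited or re-issued, while the library as a whole avoids the uniformity problem described above.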

If AI can create autonomous robots with human-like characteristics, what is the ideal personality mix for each mission? Again, we are using the anthropomorphic term "personality" for the details of a robot's behavior patterns. One could call it a robot's programming if that did not suggest the existence of an intentional programmer. The robots' personalities have evolved from their participation in a very large number of simulations. It is unlikely that any human will fully understand a given personality or be able to fully predict all aspects of a robot's behavior.

In a simple case, there may be one optimum personality for all the robots of one type. In more complicated situations, where robots will interact with each other, having robots that respond differently to the same stimuli could make a unit more robust. These are things that military planners can hope to learn through testing and training. Of course, attributes of personality that may have evolved for one set of situations may be less than optimal, or positively dangerous, in another. We talk a lot about artificial intelligence. We don't discuss artificial mental illness. But there is no reason to rule it out.

Of course, humans will need to be trained to interact with the machines. Machine learning systems already often exhibit sophisticated behaviors that are difficult to describe. It's unclear how future AI-enabled robots will behave in combat. Humans, and other robots, will need experience to know what to expect and to deal with any unexpected behaviors that may emerge. Planners need experience to know which plans might work.

But the human-robot relationship might turn out to be something completely different. For all of human history, generals have had to learn their soldiers' capabilities. They knew exactly what their troops could do. They could judge the psychological state of their subordinates. They might even know when they were being lied to. But today's commanders do not know, yet, what their AI might prove capable of. In a sense, it is the AI troops that will have to train their commanders.

In traditional military services, the primary peacetime occupation of the combat unit is training. Every single servicemember has to be trained up to the standard necessary for wartime proficiency. This is a huge task. In a robot unit, planners, maintainers, and logisticians will have to be trained to train and maintain the machines but may spend little time working on their hardware except during deployment.

What would the units look like? What is the optimal unit rank structure? How does the human rank structure relate to the robot rank structure? There are a million questions as we enter uncharted territory. The way to find out is to put robot units out onto test ranges where they can operate continuously, test software, and improve machine learning. AI units working together can learn and teach each other and humans.

Conclusion

AI-enabled robots will need to be organized, trained, and maintained. These systems will have human-like characteristics and will likely develop distinct personalities. The military will need an extensive training program to inform new doctrines and concepts for managing this powerful, but unprecedented, capability.

It's unclear what structures will prove effective to manage AI robots. Only by continuous experimentation can people, including computer scientists and military operators, understand the developing world of multi-unit human and robot forces. We must hope that experiments lead to correct solutions. There is no guarantee that we will get it right. But there is every reason to believe that as technology enables the development of new and more complex patterns of robot behavior, new types of military organizations will emerge.

Thomas Hamilton is a Senior Physical Scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

Image: Wikicommons (U.S. Air Force photo by Kevin L. Moses Sr.)


Ron Wyden: Modifying Section 230 Will Give More Censorship Power To Trump; And Lock In Facebook’s Dominance – Techdirt

from the exactly dept

We've already pointed out that Facebook's latest moves to say it's okay to strip away Section 230's protections are all about giving Facebook more power and harming competitors -- and now the author of Section 230, Senator Ron Wyden, has put out quite an op-ed in the Washington Post explaining just how much damage would be done in chipping away at Section 230. In particular, he highlights two key reasons why we shouldn't do it: (1) It would lock in the most powerful companies like Facebook and Google (even as misguided critics seem to think taking away Section 230 protections will harm them), and (2) It will enable the Trump administration to increase online censorship of marginalized voices.

On the first point, the argument is the one I made regarding Facebook's new stance, though Wyden expresses it succinctly:

Some have argued that repealing Section 230 would punish Facebook and Google for their failures. That's simply not true. The biggest tech companies have enough lawyers and lobbyists to survive virtually any regulation Congress can concoct. It's the start-ups seeking to displace Big Tech that would be hammered by the constant threat of lawsuits.

He notes, as we have in the past, that most of the lobbying to gut 230 is being led by industries that failed to adapt to the internet, and are now using 230 as a hammer to try to stay relevant.

The argument about speech is equally important:

I'm certain this administration would use power to regulate speech to punish its enemies and protect its allies. It would threaten Facebook or YouTube for taking down white supremacist content. It would label Black Lives Matter activists as purveyors of hate.

Again, this is exactly what we've warned about. Section 230 has created spaces online for the most marginalized to speak out -- and they will be the first to be silenced. Indeed, that's exactly what we've already seen post-SESTA. The law that was passed in the name of "protecting sex trafficking victims" has actually put sex workers at risk. Wyden points out that the law appears to have done the opposite of what its backers promised:

Backpage was shut down before SESTA even went into effect. And sex workers have been driven to the dark Web or the streets, where sex trafficking has increased dramatically. The most vulnerable group bore the brunt of this law.

And the same is likely for any other attempt to attack 230 as well.

What's really incredible in all of this is how little those looking to modify or remove 230 seem to even understand 230. They seem to blame all sorts of societal problems on 230, even though all 230 has done is allow people to express themselves. And from there, the complaints against 230 are often contradictory. Some are worried that too much speech is silenced through moderation, while others complain that not enough speech is silenced. But neither is a 230 problem. They are all just representations of the impossibility of pleasing everyone when it comes to moderation policies. But taking away 230 or even modifying it won't change any of that. All it will do is lead to much greater censorship, and much more power for the biggest internet companies.

As is often the case, it would be nice if others in Congress actually listened to Ron Wyden on this -- as he's been right since the very beginning, and every time people ignore him, they end up looking foolish. Unfortunately, I fear that they will end up looking foolish yet again.



Evidence That Conservative Students Really Do Self-Censor – The Atlantic

The report provides strong confirmation that conservatives face a hostile campus.

Among students who self-identify as liberals, some 10 percent said they hear disrespectful, inappropriate, or offensive comments about foreign students at least several times a semester, 14 percent said they hear disparaging comments about Muslims, 20 percent said they hear such comments about African Americans, 20 percent said they hear such comments about Christians, 21 percent said they hear such comments about LGBTQ individuals, and 57 percent said they hear such comments about conservatives. Among moderates, 68 percent said that they hear disrespectful, inappropriate, or offensive comments about conservatives at least several times a semester.

Out conservatives may face social isolation. Roughly 92 percent of conservatives said they would be friends with a liberal, and just 3 percent said that they would not have a liberal friend. Among liberals, however, almost a quarter said they would not have a conservative friend. Would UNC be a better place without conservatives? About 22 percent of liberals said yes. Would it be a better place without liberals? Almost 15 percent of conservatives thought so.


"Self-identified conservative students do in fact face distinct challenges related to viewpoint expression at UNC," the authors conclude. They urge a conversation about how the campus can become more accepting of conservative students as well as more willing to hear and engage with conservative ideas. After all, they ask, who would dispute that universities should be places "where each idea is considered on its own terms, and not prejudged? Where sincerely held conclusions can be offered up for vigorous and civil contestation? Where students are assumed to be arguing in good faith and where they feel valued and respected, even should they turn out to be wrong?"

Just as important, the authors correctly emphasize that "the wrong way to interpret our report would be to see it as pitting liberals against conservatives," not only because many liberals and moderates harbor similar anxieties about sharing earnest views, but also because even though political hostility emerges disproportionately from the political left at UNC, that hostility comes from a minority, not a majority, of liberals. Tolerant students, though divided in their politics, belong to a cross-ideological majority that is ill-served by the minority faction of intolerant censors.

Self-censorship is among several significant reasons to believe that free speech remains under threat on American campuses, harming undergraduate education. I try to avoid talk of crisis, because I believe that free speech is perpetually threatened and requires constant vigilance to sustain. But however we label the status quo, America's professors ought to be aware of these problems.


This is state censorship of the internet – Spiked

The UK government has unveiled its proposals to tackle so-called online harms. It wants to regulate social media through Ofcom, which currently regulates the media and the telecoms industry.

Under the proposals, Ofcom will be empowered to ensure that tech firms adopt a duty of care towards users, especially children. This is to protect users, first, from illegal content, such as child pornography, which Ofcom will require tech firms to remove; and second, from harmful but legal content. In the second case, Ofcom will require tech firms to be upfront about what behaviour is acceptable and unacceptable on their sites, in the shape of transparently enforced terms and conditions. So, if a social-media platform states that promoting self-harm is unacceptable, Ofcom is empowered to ensure that stipulation is enforced. In addition, all companies will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content.

Failure to comply with Ofcom's demands could, or so at least one report suggests, result in executives at offending companies receiving substantial fines or even prison sentences.

Full details about the legislation and the powers it entails will be released this spring. But make no mistake: even as it stands this plan is a serious threat to internet freedom.

For one thing, these proposals don't just encompass the internet's social-media behemoths, such as Facebook. Ofcom's writ will run to all sites that provide services allowing the sharing of user-generated content or user interactions. That means if you run a pressure group, or a political website, and publish material or comments from users, then you are potentially in Ofcom's crosshairs.

What's more, quite apart from demanding that tech firms take down illegal material, Ofcom will require all sites featuring user-generated content to ensure their own terms and conditions are enforced. That is quite a burden. First, all sites will be forced to draft terms and conditions, and conceive of thresholds for harmful but legal content. They will then also have to come up with processes and systems to deal with complaints and allow for redress. And then they will have to take responsibility for enforcing the terms and conditions or face the potential wrath of Ofcom.

Empowering Ofcom to enforce sites own regulation of harmful but legal content could be disastrous. And you can bet that there will be plenty of people and pressure groups itching to use this new state power to suppress discussions they would rather not see take place.

Yes, the plan states that safeguards for freedom of expression have been built in throughout the framework. Hence the freedom to publish harmful but legal content as long as it's clearly permitted in a platform's terms and conditions. But unfortunately, even this freedom is qualified by the imperative to respect the rights of children, and the corresponding demand that companies ensure there is a higher level of protection for children. From this, it could follow that there will be removal-of-content orders aimed at legal discussions of, for example, the morality of suicide, or anti-vaccination, because they are deemed too harmful to children.

Besides, the line between legal and illegal speech is pretty fluid anyway. Despite former policeman Harry Miller's minor victory over an over-intrusive Humberside Police last week, the catch-all prohibition in section 127 of the Communications Act 2003 on "grossly offensive" material online is open to interpretation. It still means that any pungent or forceful statement that happens to annoy some interest group or other could give Ofcom reason to think it criminal and demand removal.

For all home secretary Priti Patel's talk of needing to tame the "Wild West" of the internet in order to protect our children, it is clear what we have here: a plan for worryingly sweeping restrictions on what we can say, or allow others to say, online, not to mention an enormous increase in bureaucrats' power to snoop.

It is not even clear that any of this will be very effective. Even Ofcom accepts that it can only realistically intervene in sites in the UK. Depending on how the government responds to criticisms already made of its proposals, we shall have to see whether its plans merely prompt controversial sites to move abroad, or even to some convenient offshore jurisdiction, like the Isle of Man. If this happens, there will be precious little Ofcom will be able to do about them even if what they say is truly criminal. Even by Ofcom's curious standards, that would be a spectacular own goal.

Andrew Tettenborn is a professor of commercial law and a former Cambridge admissions officer.

Picture by: Getty.


Organizations pen letter to Apple calling on an end to censorship in China – iMore

A coalition of civil, political and human rights groups have penned an open letter to Apple, calling on the company to stop enabling censorship and surveillance in China.

As spotted by Phayul, the letter was signed by groups such as Tibet Action Institute, Free Tibet, Keep Taiwan Free and SumOfUs.

The letter, addressed directly to Phil Schiller, reads:

We are a coalition of civil, political, human rights, freedom of expression, corporate accountability, privacy, and digital security organizations, many of whom are longtime Apple users. Together we represent communities in the US and abroad gravely impacted by Apple's decisions with regard to the Chinese App Store and user information. We are writing to express our serious concerns over Apple's confirmed removal of applications from the iOS App Store in China, including 1,000+ Virtual Private Networks (VPNs) and news apps like the New York Times and Quartz, as well as the transfer of Apple users' iCloud data to a Chinese state-run telecom company. Many of our organizations have submitted letters to CEO Tim Cook raising these concerns and have yet to receive any response. Given that Apple's removal of VPNs and news apps sets a blatant and unethical double standard for the Chinese App Store, we are now bringing our serious concerns directly to you, the head of the App Store.

The letter highlights concerns such as Apple's "compliance with China's censorship and surveillance demands", which puts the App Store's actions "in direct contradiction" with its claim that "Privacy is a fundamental human right." It continues:

In reality, Apple's actions demonstrate that privacy is only a right for certain people. Since Apple removed VPNs from the App Store, iOS users in China have been left unable to easily protect their internet communications from pervasive surveillance. Apple's closed App Store ecosystem forces users who want to install banned applications to jailbreak their devices and give up the security measures that make Apple devices unique. Additionally, since relocating China's Apple iCloud data to mainland China, Apple has further ensured that hundreds of millions of people are forced to choose between allowing their data to be obtained without effective due process, or forgoing the online storage and backup measures your company has diligently developed.

The letter also mentions incidents such as the HKmap.live app, as well as the removal of the Taiwanese flag for users in Hong Kong, Macau and Mainland China.

The letter concludes by asking that Apple meets with the group to discuss the concerns outlined, as well as asking that Apple pressure governments to be "specific, transparent, and consistent in their requirements". You can read the letter in its entirety here.

