Media Search:



In regard to Afghanistan, Bush and Obama made three major mistakes – D+C Development and Cooperation

Led by the USA, the international community took an ambiguous approach to Afghanistan over the past two decades. The goal was to build a modern state, but from the very start a light footprint was preferred. As Paul D. Miller told Hans Dembowski in an interview, three major mistakes made by two consecutive US presidents ultimately caused the failure.

Today, the common narrative is that it was wrong to try to build a modern, democratic Afghan state. As I remember it, however, the necessity of doing so was generally understood 20 years ago. After the attacks on New York and Washington DC of 11 September 2001, the goal was to ensure that Afghanistan would never again become a safe haven for terrorists. The implication was that a power vacuum was unacceptable.

Exactly, there was no other choice. That is what former officials of the Bush administration are still saying today. In 2001/2002, that view was shared internationally, including by NATO leaders and UN officials. Unfortunately, this insight did not lead them to draft a coherent state-building strategy. State building is a complex challenge, however, and it takes a lot of time. Institutions have to be established and consolidated step by step. Capable staff cannot simply be bought. To earn public trust, officers need training and considerable practical experience. Yet we and our allies did not commit to long-term engagement.

To what extent was state building attempted in Afghanistan at all?

It varied from year to year. In the first five years, the focus was on political reconstruction in the sense of holding elections and passing a constitution. Both worked out fairly well. The constitution was based on Afghanistan's 1964 constitution and updated by Afghans who represented the country's people and understood its constitutional history. It was Afghan-owned rather than imposed by western powers. On the downside, there were no significant efforts to build infrastructure. Afghanistan badly needed roads, hospitals and schools, but also institutions such as law courts and municipal governments. Things changed somewhat in the years 2007 to 2011, when insurgents were gaining strength. In that period, much more was done to ramp up the legal system, develop rural areas and build administrative capacities. By that point, however, reconstruction efforts were rushed and thus often wasteful, the conflict further intensified, and international support later focused almost entirely on the Afghan army and police.

Did western allies fight or foster corruption?

They did both. The core problem was that they tried to do too much too fast, especially in the second phase that I just mentioned. A lot of money suddenly flowed into a very poor country that had recently been the world's worst failed state and lacked competent institutions. The result was the rule of money. The illegal-drugs trade obviously added to the problems. Poppy cultivation began to expand fast from 2006 on, and by 2009 or so, the Taliban were relying on opium money. Others were involved in the drugs economy too, including influential leaders who officially supported the government. By the end of 2010, a destructive dynamic had set in. The focus was increasingly on fighting insurgents and not on reconstruction. The US administration lost faith in state building, which obviously became more difficult the more the conflict escalated.

Why did things go wrong?

Well, I think there were three major mistakes in the first two presidential administrations:

In the later two administrations, I have nothing good to say about President Joe Biden's withdrawal or about President Donald Trump's peace negotiations with the Taliban, which bypassed our Afghan partners and placed no meaningful demands on the Taliban, but several decisive mistakes were made long before Trump or Biden took office.

What role did other western governments play?

Well, Washington basically called the shots. At first, the idea was that individual governments would assume specific responsibilities in Afghanistan, but a sense of frustration set in by 2006. The Bush administration felt that our allies were not doing enough, which was a bit unfair, because it wasn't doing enough itself.

I find it bewildering that western leaders cared so little about the drugs economy. It accounts for up to 30% of Afghanistan's gross national product (GNP). Such a huge black market is incompatible with a modern state and the rule of law.

There were actually many proposals for solving the drugs problem. Some suggested saffron cultivation as an alternative to poppy cultivation. Others said the international community should simply buy the entire harvest to produce medical morphine. There were attempts to eradicate poppy fields. Everything stayed piecemeal, however. The point is that you cannot make meaningful progress against the drug trade if you do not have a legal system. That is especially true in a war zone. We ended up with a chicken-and-egg problem: without peace, you cannot build a legal system and other institutions, but you cannot have peace unless you have a legal system.

It is also estimated that aid accounted for about 50% of Afghanistan's GNP in recent years. There really was not much of an Afghan state.

Well, you have to consider the history of Afghanistan, which has basically been a client state for hundreds of years. For a long time, it depended on the British Empire, later on the Soviet Union. Afghanistan's official government always relied on outside funding and used that funding to pay off local clients in exchange for their support. Nonetheless, the country was largely at peace thanks to many different compromises and accommodations. That changed with the Soviet invasion of 1979.

Western failure in Afghanistan is now often blamed on Afghans' supposedly medieval mindset. I find that rhetoric condescending and misleading. The real problem is that Afghan society is controlled by warlords (as medieval Europe was, by the way). People want to survive. They do not care much about whether the armed men in front of them are legitimate in one way or another. The priority is not to get hurt and to somehow keep feeding one's family. Official legislation hardly matters in the rural regions of developing countries, where traditions rule daily life, and it is certainly not relevant in situations of strife.

The Soviets destroyed the structures of Afghan society, such as the tribal networks, landowning khans and local mullahs. That led to the rise of warlordism and, eventually, the drug economy. After 2001, the international community should not have tolerated power vacuums at the local level. The results were persistent warlordism and opportunities for the Taliban. In the west, everyone knows that Taliban rule was brutal when they controlled the country in the late 1990s. It is less well understood that they nonetheless provided a sense of order, though obviously a very rough one. They even banned poppy cultivation for one year, though many observers argue they only did so to drive up the global opium price. What matters now, however, is that Afghans are tired after four decades of war. They long for safety, and some believed the Taliban were good at providing it.

And they feel disappointed in western powers. Could the US-led intervention have achieved more?

Well, both Bush and Obama signed agreements with Afghan governments, pledging long-term support. I am convinced we could have done more had we had more patience. State building cannot be done fast anywhere, and certainly not in a very poor, war-torn country. The depressing truth is that our leaders chose the right words, but did not follow up with action. Our Afghan partners lost faith, and the USA failed to fulfil what our presidents had promised.

Paul D. Miller is a professor of the practice of international affairs at Georgetown University in Washington DC.


Why Artificial Intelligence Research Needs More Women – Women Love Tech

It's likely no surprise that women are still massively underrepresented in the tech industry today. Even with a push over the last few years for more women to pursue careers in STEM (science, technology, engineering, and mathematics), they still make up a tiny percentage of those working in the field. Data shows that of those doing STEM-focused research around the world, less than 30% are women.

Unfortunately, when you narrow the focus down to women working specifically in smart tech and machine learning, the numbers get even smaller. You might wonder why it matters who is behind the data and code creation when it's essentially a non-gendered machine or robot doing all of the processing, but it does. Machines aren't inherently biased, but humans are, and when humans are teaching machines how to learn and what to do, our biases naturally become part of the code.
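The mechanism is easy to demonstrate. In the deliberately minimal sketch below (the data, labels, and scenario are all invented for illustration, not drawn from any real system), a trivial classifier "trained" on skewed historical hiring records simply reproduces the skew in its predictions:

```python
from collections import Counter

def train_majority_classifier(examples):
    """'Train' a trivial model that predicts the most common label
    seen for each feature value in the training data."""
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in by_feature.items()}

# Hypothetical historical hiring records in which men were hired far
# more often -- a skew produced by past human decisions, not by merit.
history = ([("man", "hire")] * 80 + [("man", "reject")] * 20 +
           [("woman", "hire")] * 20 + [("woman", "reject")] * 30)

model = train_majority_classifier(history)
print(model)  # {'man': 'hire', 'woman': 'reject'}
```

Nothing in the code mentions gender policy; the bias arrives entirely through the data it was given, which is exactly how real machine-learning systems inherit human bias at far larger scale.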

Our computers, phones, and any other smart devices that we use today utilize technology that mimics our thought and decision-making processes. So if the majority of people working with smart tech such as artificial intelligence are men, then anything that utilizes AI will skew towards the male perspective.

Though artificial intelligence might bring to mind images of a future world run by robot overlords, it is far less ominous and science fiction-like than that. AI is already a part of so many things that we use and interact with daily; it's not the future, it's the present.

Though artificial intelligence sounds like a far-fetched term, it is basically the use of algorithms in computer systems to mimic how humans process information, and the more input it receives, the smarter it gets. It's not just a part of our personal devices either; businesses are using AI to improve customer service and interpret data to further develop their systems and run more efficiently.

Google is even leveraging the power of AI to create tools to enhance healthcare and help conservationists and scientists save endangered species and preserve indigenous languages. There is no end to the way we can harness the usefulness of AI. We can apply it to numerous situations to advance our capabilities and solve complex real-world problems.

Artificial intelligence and deep learning systems are not just likely to change our future; they are already doing so. As humans, we are limited by our own minds and capabilities. There is only so much we can do, but AI and deep learning machines will allow us to scale our potential and complete tasks, projects, and missions that we would otherwise not be able to do.

Deep learning refers to systems with more advanced neural networks that can draw conclusions from the input they receive and adapt to changes. In contrast, more basic and earlier forms of AI can only do what they are specifically taught to do.
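That contrast can be illustrated with a toy example. The sketch below (a deliberately minimal, hypothetical illustration, nowhere near how production systems are built) pairs a hand-coded rule, which can only ever do exactly what it was written to do, with a tiny single-neuron perceptron that adjusts its weights from examples, so changing the training data changes its behaviour:

```python
def rule_based(x1, x2):
    # A hand-coded system: it can only ever do exactly this.
    return 1 if x1 == 1 and x2 == 1 else 0

def train_perceptron(data, epochs=20, lr=0.1):
    """A learning system: weights are adjusted from examples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred          # learn only from mistakes
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Teach the perceptron the logical AND function purely from examples.
and_data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
learned = train_perceptron(and_data)
print([learned(x1, x2) for x1, x2, _ in and_data])  # [0, 0, 0, 1]
```

Feed the same training loop different examples and it learns a different function; the rule-based version would have to be rewritten by hand.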

Some examples of deep learning applications include:

- speech recognition and voice assistants
- image and facial recognition
- machine translation
- recommendation engines

So how will deep learning and other advanced forms of AI affect our future? The simple answer is that they will make everything easier and more efficient. As mentioned above, as humans, we can only do so much. We should continue to maintain a human presence and interactions within our business operations; people still need that human touch. But when it comes to interpreting data and solving complex equations to optimise our systems and make new advancements, we need the help of AI.

Several industries can benefit from having more women, especially those handling the development and research of machine learning systems. AI is already an integral part of our society and nearly all industries. Our economy and its infrastructure run on many systems that use AI every day. The problem is that when these systems are all designed primarily by men, the processes that the computers use to learn and interpret data will produce skewed outcomes.

Mark Minevich, a contributor to Forbes.com, writes: "organisations will always fail to harness the fullest capacity of their digital innovations without including women, as machine learning technologies will be fed a constant stream of biased data, producing junk results that are not reflective of the full picture, causing potentially catastrophic harm to organisations." And he's right. If AI is becoming a significant part of our society and infrastructure, and women will undoubtedly continue to be a part of this society, we need to include them in the research and development of these systems.

From transportation and education to media, customer service, healthcare and wellness, industries are increasingly integrating artificial intelligence into their systems. Without more representation of women, the data these industries work from and use to improve their operations will be deeply inaccurate. We don't just need more women researching AI; we must have them. Continuing to leave them out is not an option. It is vital to the growth and success of AI itself, to our growth as a society, and to our ability to advance.

Report by Beau Peters.


Artificial intelligence success is tied to ability to augment, not just automate – ZDNet

Artificial intelligence is only a tool, but what a tool it is. It may be elevating our world into an era of enlightenment and productivity, or plunging us into a dark pit. To help achieve the former, and not the latter, it must be handled with a great deal of care and forethought. This is where technology leaders and practitioners need to step up and help pave the way, encouraging the use of AI to augment and amplify human capabilities.

Those are some of the observations drawn from Stanford University's recently released report, the next installment of its One-Hundred-Year Study on Artificial Intelligence, an extremely long-term effort to track and monitor AI as it progresses over the coming century. The report series, first launched in 2016, is prepared by a standing committee that includes a panel of 17 experts, and urges that AI be employed as a tool to augment and amplify human skills. "All stakeholders need to be involved in the design of AI assistants to produce a human-AI team that outperforms either alone. Human users must understand the AI system and its limitations to trust and use it appropriately, and AI system designers must understand the context in which the system will be used."

AI has the greatest potential when it augments human capabilities, and this is where it can be most productive, the report's authors argue. "Whether it's finding patterns in chemical interactions that lead to a new drug discovery or helping public defenders identify the most appropriate strategies to pursue, there are many ways in which AI can augment the capabilities of people. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data -- say if missing data fields are actually a signal for important, unmeasured information for some subgroup represented in the data -- working with difficult-to-fully quantify objectives, and identifying creative actions beyond what the AI may be programmed to consider."

Complete autonomy "is not the eventual goal for AI systems," the co-authors state. There needs to be "clear lines of communication between human and automated decision makers. At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help."

The report examines key areas where AI is developing and making a difference in work and lives:

Discovery:"New developments in interpretable AI and visualization of AI are making it much easier for humans to inspect AI programs more deeply and use them to explicitly organize information in a way that facilitates a human expert putting the pieces together and drawing insights," the report notes.

Decision-making:AI helps summarize data too complex for a person to easily absorb. "Summarization is now being used or actively considered in fields where large amounts of text must be read and analyzed -- whether it is following news media, doing financial research, conducting search engine optimization, or analyzing contracts, patents, or legal documents. Nascent progress in highly realistic (but currently not reliable or accurate) text generation, such as GPT-3, may also make these interactions more natural."

AI as assistant:"We are already starting to see AI programs that can process and translate text from a photograph, allowing travelers to read signage and menus. Improved translation tools will facilitate human interactions across cultures. Projects that once required a person to have highly specialized knowledge or copious amounts of time may become accessible to more people by allowing them to search for task and context-specific expertise."

Language processing:Language processing technology advances have been supported by neural network language models, including ELMo, GPT, mT5, and BERT, that "learn about how words are used in context -- including elements of grammar, meaning, and basic facts about the world -- from sifting through the patterns in naturally occurring text. These models' facility with language is already supporting applications such as machine translation, text classification, speech recognition, writing aids, and chatbots. Future applications could include improving human-AI interactions across diverse languages and situations."
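The models named above are large neural networks, but the underlying idea of learning how words are used from patterns in naturally occurring text can be shown with a deliberately crude sketch. The toy bigram counter below (all sentences invented; this is a stand-in for the distributional idea only, not for how ELMo or BERT actually work) learns which word tends to follow which purely from examples:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word follows which in running text -- a very crude
    stand-in for how language models pick up usage patterns from data."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def most_likely_next(model, word):
    # Predict the word most often observed after `word` in training.
    return model[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(most_likely_next(model, "sat"))  # on
```

Scale the counting idea up from adjacent-word pairs to billions of parameters over whole documents and you get, very roughly, the trajectory from this toy to the neural language models the report describes.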

Computer vision and image processing:"Many image-processing approaches use deep learning for recognition, classification, conversion, and other tasks. Training time for image processing has been substantially reduced. Programs running on ImageNet, a massive standardized collection of over 14 million photographs used to train and test visual identification programs, complete their work 100 times faster than just three years ago." The report's authors caution, however, that such technology could be subject to abuse.

Robotics: "The last five years have seen consistent progress in intelligent robotics driven by machine learning, powerful computing and communication capabilities, and increased availability of sophisticated sensor systems. Although these systems are not fully able to take advantage of all the advances in AI, primarily due to the physical constraints of the environments, highly agile and dynamic robotics systems are now available for home and industrial use."

Mobility: "The optimistic predictions from five years ago of rapid progress in fully autonomous driving have failed to materialize. The reasons may be complicated, but the need for exceptional levels of safety in complex physical environments makes the problem more challenging, and more expensive, to solve than had been anticipated. The design of self-driving cars requires integration of a range of technologies including sensor fusion, AI planning and decision-making, vehicle dynamics prediction, on-the-fly rerouting, inter-vehicle communication, and more."

Recommender systems:The AI technologies powering recommender systems have changed considerably in the past five years, the report states. "One shift is the near-universal incorporation of deep neural networks to better predict user responses to recommendations. There has also been increased usage of sophisticated machine-learning techniques for analyzing the content of recommended items, rather than using only metadata and user click or consumption behavior."
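The report describes the shift toward deep neural recommenders; as a point of comparison, the sketch below shows the much simpler collaborative-filtering baseline such systems grew out of, recommending unseen items from the most similar other user's consumption history (all user names and items are invented for illustration):

```python
def jaccard(a, b):
    # Similarity between two users' consumption histories.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, histories):
    """Suggest items consumed by the most similar other user that the
    target has not seen -- the classic 'people like you also watched'
    heuristic based purely on consumption behavior."""
    others = {u: h for u, h in histories.items() if u != target}
    nearest = max(others, key=lambda u: jaccard(histories[target], others[u]))
    return sorted(set(histories[nearest]) - set(histories[target]))

histories = {
    "ana":  ["matrix", "alien", "dune"],
    "ben":  ["matrix", "alien", "blade runner"],
    "carl": ["notebook", "titanic"],
}
print(recommend("ana", histories))  # ['blade runner']
```

A system like this uses only click or consumption behavior; the trend the report notes is toward models that also analyze the content of the items themselves.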

The report's authors caution that "the use of ever-more-sophisticated machine-learned models for recommending products, services, and content has raised significant concerns" about issues of fairness, diversity, polarization, and the emergence of filter bubbles, in which the recommender system's suggestions increasingly narrow what users see. While these problems require more than just technical solutions, increasing attention is being paid to technologies that can at least partly address such issues.


Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo – Forbes


AI is becoming more and more prevalent in society, and many people wonder how it will affect the law. This article looks at how artificial intelligence is shaping our laws and what we can expect from future interactions between technology and the legal system.

The conversation surrounding the relationship between AI and law also touches quite clearly on the ability to rely on artificial intelligence to deliver fair decisions and to enhance the legal system's delivery of equity and justice.

In this article, I share insights from my conversations on this topic with Joilson Melo, a Brazilian law expert and programmer whose devotion to equity and fairness led to a historic change in the Brazilian legal system in 2019. That change mainly affected the PJe (Electronic Judicial Process), the system that handles all digitally processed cases in Brazil.

As a law student, Melo filed a request for action with the National Council of Justice (CNJ) against the Court of Justice of Mato Grosso, resulting in a decision that allows citizens to file applications in court electronically, without a lawyer, within the Special Courts, provided the value of the case does not exceed 20 minimum wages. Melo's petition revealed provisions in the law that allowed for this, and his victory enforced those provisions. The results for the underprivileged and those who couldn't afford lawyers have been immense.

On the relationship between AI and the law, Melo remains a bit on the fence.

"The purpose of the law is justice, equity, and fairness," says Melo.

"Any technology that can enhance that is welcome in the legal arena. Artificial intelligence has already been shown to be as biased as the data that it is fed. This instantly places a greater burden of care on us to ensure that it is adopted through a careful process in the legal space and society at large."

The use of AI to predict jury verdicts has been around for quite some time now, but it's unclear whether an algorithm can accurately predict human behavior. There have also been studies showing that machine learning algorithms can be used to help judges make sentencing decisions based on factors such as recidivism rates.

In theory, this seems to solve a glaring problem: the algorithmic tools are supposed to predict criminal behavior and help judges make decisions based on data-driven recommendations rather than their gut.

However, as Melo explains, this also presents some deep concerns for legal experts: "AI risk assessment tools run on algorithms that are trained on historical crime data. In countries like America and many other nations, law enforcement has already been accused of targeting certain minorities, and this is shown by the high number of these minorities in prisons. If the same data is fed in, the AI is going to be just as biased."

Melo continues: "Besides, the algorithms turn correlative insights into causal insights. If the data shows that a particular neighborhood is correlated with high recidivism, it doesn't prove that this neighborhood caused recidivism in any given case. These are things that a judge should be able to tell from his observations. Anything less is a far cry from justice, unless we figure out a way to cure the data."
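Melo's correlation-versus-causation point can be made concrete with a toy sketch. In the hypothetical example below (districts, counts, and the policing scenario are all invented), a naive risk score built from historical records faithfully reproduces the heavier policing of district A as "higher risk", without that proving anything causal about the district:

```python
def risk_score(neighborhood, records):
    """Score = share of past cases from this neighborhood that were
    re-arrested. It encodes correlation in the records, not causation."""
    cases = [rearrested for n, rearrested in records if n == neighborhood]
    return sum(cases) / len(cases)

# Hypothetical records from a city where district A was policed far more
# heavily, so re-arrests were *recorded* there far more often.
records = ([("A", 1)] * 60 + [("A", 0)] * 40 +
           [("B", 1)] * 10 + [("B", 0)] * 40)

print(risk_score("A", records))  # 0.6
print(risk_score("B", records))  # 0.2
```

A judge reading 0.6 versus 0.2 as a statement about the people of district A, rather than about how the records were produced, is making exactly the inference Melo warns against.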

As we continue developing smarter technologies, data protection becomes an increasingly important issue. This includes protecting private information from hackers and complying with GDPR standards across all industries that collect personal data about their customers.

Apart from the GDPR, not many countries have passed targeted laws that address big data. According to the International Legal Technology Association's 2018 Technology Survey, 100 percent of law firms with 700 or more lawyers use AI tools or are pursuing AI projects.

If this trend continues and is met by a willingness of courts and judges to adopt AI, law firms would eventually fall into the category of organisations that need to abide by data protection rules. Attorney-client privilege could be put at risk by a hack, and court decisions as well.

The need for stringent local laws that regulate how data is received and managed has never been clearer, which is why it is shocking that many governments have not acted faster.


"Many governments have an unholy alliance with tech giants and the companies that deal most with data," says Melo.

"These companies are at the front of national development and are the most attractive national propositions for investments. Leaders do not want to stifle them or be seen as impeding technological advancement. However, if the law must apply equally, governments should take a cue from the GDPR and start now, before we see privacy violations worse than those we already have."

As Artificial Intelligence becomes more ingrained in our lives, so do the legal issues that surround it.

One of the most prevalent legal questions is whether machines should be allowed to control self-driving cars and deadly weapons. Self-driving cars are already on the market, but they have a long way to go before they could replace human drivers. The technology has not been perfected yet and will require huge strides forward before we can say with certainty that these vehicles are safe for society at large.

The larger concerns touch on how easily these algorithms can be hacked and influenced externally.

AI and Weapons/War Crimes: The possibility of autonomous weapons systems has been touted in many spheres as a powerful way to identify and eliminate threats. This has met with strong pushback, for obvious reasons. Empathy, concession, and a certain big-picture approach have always played crucial roles in war and border security. These are traits that we still cannot inculcate into an algorithm.

Human Rights Questions: One of the main questions that arises in the area of human rights concerns algorithmic transparency. There have been reports of people losing jobs, being denied loans, and being put on no-fly lists with no explanation other than "it was an algorithmic determination."

If this pattern persists the risk to human rights is enormous. The questions of cybersecurity vulnerabilities, AI bias, and lack of contestability are also concerns that touch on human rights.

Melo's concern seems more targeted at the law and how it can be preserved as an arbiter of justice and an enforcer of human rights, and he rightly points out the implications of leaving these questions unanswered.

"Deciding not to adopt AI in society and legal systems is deciding not to move forward as a civilization," Melo comments.

"However, deciding to adopt AI blindly would see us move back into a barbaric civilization. I believe that the best approach is to take a piecemeal approach towards adoption: take a step, spot the problems, eliminate them and then take another step."

The law and legal practitioners stand to gain a lot from a proper adoption of AI into the legal system. Legal research is one area where AI has already begun to help. AI can streamline the thousands of results an internet or directory search would otherwise return, offering a smaller, digestible handful of relevant authorities for legal research. This is already proving helpful, and with more targeted machine learning it will only get better.

The possible benefits go on: automated drafts of documents and contracts, document review, and contract analysis are some of those considered imminent.

Many have even considered the possibilities of AI in helping with more administrative functions like the appointment of officers and staff, administration of staff, and making the citizens aware of their legal rights.

A future without AI seems bleak and laborious for most industries, including the legal industry, and while we must march on, we must be cautious about our strategies for adoption. This point is better put in the words of Joilson Melo: "The possibilities are endless, but the burden of care is very heavy; we must act and evolve cautiously."



New report assesses progress and risks of artificial intelligence – Brown University

"While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders (experts who create AI algorithms or study their influence on society as their main professional activity) and part of an ongoing, longitudinal, century-long study," said Peter Stone, a professor of computer science at the University of Texas at Austin, executive director of Sony AI America and chair of the AI100 standing committee. "The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals."

Eric Horvitz, chief scientific officer at Microsoft and co-founder of the One Hundred Year Study on AI, praised the work of the study panel.

"I'm impressed with the insights shared by the diverse panel of AI experts on this milestone report," Horvitz said. "The 2021 report does a great job of describing where AI is today and where things are going, including an assessment of the frontiers of our current understandings and guidance on key opportunities and challenges ahead on the influences of AI on people and society."

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications.

In the area of natural language processing, for example, AI-driven systems are now able not only to recognize words, but to understand how they're used grammatically and how meanings can change in different contexts. That has enabled better web search, predictive text apps, chatbots and more. Some of these systems are now capable of producing original text that is difficult to distinguish from human-produced text.

Elsewhere, AI systems are diagnosing cancers and other conditions with accuracy that rivals trained pathologists. Research techniques using AI have produced new insights into the human genome and have sped the discovery of new pharmaceuticals. And while the long-promised self-driving cars are not yet in widespread use, AI-based driver-assist systems like lane-departure warnings and adaptive cruise control are standard equipment on most new cars.

Some recent AI advances may be overlooked by observers outside the field, but they actually reflect dramatic strides in the underlying AI technologies, Littman says. One relatable example is the use of background images in video conferences, which became a ubiquitous part of many people's work-from-home lives during the COVID-19 pandemic.

"To put you in front of a background image, the system has to distinguish you from the stuff behind you, which is not easy to do just from an assemblage of pixels," Littman said. "Being able to understand an image well enough to distinguish foreground from background is something that maybe could happen in the lab five years ago, but certainly wasn't something that could happen on everybody's computer, in real time and at high frame rates. It's a pretty striking advance."
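Modern systems do this with neural segmentation models, as Littman describes; the classic baseline they improved on, though, is simple frame differencing. The sketch below (tiny hand-made grayscale "frames", purely illustrative) marks as foreground any pixel that differs enough from a reference background frame:

```python
def foreground_mask(frame, background, threshold=30):
    """Mark pixels that differ from a reference background frame.
    Frames are rows of grayscale values (0-255). Real video-conference
    systems use neural segmentation; this is the crude classic baseline."""
    return [
        [1 if abs(p - b) > threshold else 0
         for p, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]

background = [
    [200, 200, 200],
    [200, 200, 200],
]
frame = [
    [200,  40, 200],   # dark "person" pixels in the middle column
    [200,  35, 200],
]
print(foreground_mask(frame, background))  # [[0, 1, 0], [0, 1, 0]]
```

The baseline fails the moment the camera moves or the lighting changes; doing the separation robustly, per frame, in real time, from a single moving camera is the part that required the deep-learning advances Littman points to.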

As for the risks and dangers of AI, the panel does not envision a dystopian scenario in which super-intelligent machines take over the world. The real dangers of AI are a bit more subtle, but are no less concerning.

Some of the dangers cited in the report stem from deliberate misuse of AI: deepfake images and video used to spread misinformation or harm people's reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from "an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination," the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people's access to appropriate care.

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman says, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. The makeup of the panel that produced the report reflects the widening perspective coming to the field, Littman says.

"The panel consists of almost half social scientists and half computer science people, and I was very pleasantly surprised at how deep the knowledge about AI is among the social scientists," Littman said. "We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That's a positive trend."

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.
