Media Search:



Harmon’s New Casebook Is First to Look at Law of the Police – UVA Law

As debates about policing pervade the public conversation, Professor Rachel Harmon of the University of Virginia School of Law has written the first casebook to look at the laws that govern police conduct in the United States.

The Law of the Police, published by Wolters Kluwer and available now, takes on the question of how the law shapes police-citizen encounters and how the law might be leveraged to make policing serve the public better.

Harmon, a former federal prosecutor who directs the Law School's Center for Criminal Justice, has taught a course on the laws governing police for 15 years. She came to UVA Law in 2006 after spending eight years in the U.S. Department of Justice's Civil Rights Division.

Throughout her time in academia, she has wrestled with what role, if any, policing should have in people's lives, and how best to prevent misconduct.

"I came to the Law School from practice, where I spent years prosecuting civil rights cases, including against police officers," she said. "Over time, I got frustrated with criminal prosecution as a response to police misconduct. Prosecuting illegal police violence can be important, but I knew there had to be better ways to prevent problems in policing."

Among her goals for the book, she said, was to look at how different laws and legal rules make policing more or less harmful.

The book is a reaction to the traditional approach to policing the police, which is rights-focused. For example, a common police practice she considers problematic is selectively asking drivers, based on a gut feeling, to open their trunks during a traffic stop, with all of the officer's conscious and unconscious biases in tow.

"Lawyers have typically looked at such problems and argued that they violate Fourth Amendment doctrine or, if they don't, that the doctrine should be changed," she said. "I see things differently."

In the evolution of her thoughts, Harmon first looked at how existing rights and remedies might be applied to curb policing that works against the public interest.

"I spent my first couple of years as an academic looking at legal remedies to see whether they could be used to prevent problems in policing and tossing them over my shoulder," Harmon said. "So civil rights damages actions, is that going to work? No, that's not going to work a lot of the time. Justice Department investigations of police departments, is that going to work? No, that won't work well enough either."

She then suggested enhancements to these existing tools, before going another way.

"I wrote a couple of articles trying to improve rights and remedies before I started to write about how to think more broadly about police misconduct as a regulatory problem," she said. "The question is not only how to remedy police misconduct, but how to use law to get the public safety we want, both through policing and through other means."

Focusing on that question led Harmon to study the harms of policing and how the law overlooks them or contributes to them.

Moreover, studying the vast array of legal rules that shape policing and police departments led Harmon to realize how little of it lawyers and law students may know, she said.

"Hopefully, the book can be a resource, not just for law students, but for academics, lawyers, police chiefs, journalists, activists, judges or just about anyone interested in how the law actually governs policing and how it might do so differently, whether that's reforming police departments or turning public safety over to nonpolice actors," she said.

She noted that the book is different from a criminal procedure textbook, which specifically prepares lawyers for the concepts they will need to know as future prosecutors or defenders. Her book is organized by police practices, such as stopping traffic, using force, maintaining order, and policing resistance and protests, rather than legal categories dictated by Fourth and Fifth Amendment law. The book covers departmental policies and local and state law, as well as federal statutes and cases. It also addresses topics law students rarely study and on which there are few resources for lawyers and commentators, such as asset forfeiture, protest policing, video recording the police, and criminal investigations and prosecutions of police officers.

Even so, that hasn't stopped some professors who have given her book a test run from using it in their criminal procedure courses. Harmon said that the book was not conceived with that purpose in mind, but she has grown more comfortable with the idea that it can be used to teach an alternative version of criminal procedure, one in which the police are front and center.

Harmon is a member of the American Law Institute and serves as an associate reporter for ALI's project on Principles of the Law of Policing. She advises nonprofits and government actors on issues of policing and the law, and served as a policing expert for the independent review of the white supremacist events of Aug. 11-12, 2017, in Charlottesville, Virginia.

In December she co-authored a report, "Policing Priorities for the New Administration," advocating a stronger regulatory approach. The report, written in collaboration with Barry Friedman and the Policing Project at the New York University School of Law, urged the White House to appoint a policing czar and to require that all of the more than 80 federal law enforcement agencies meet basic standards for transparency, among other clear and actionable measures.


The precedent of free speech on campus | The Record – The Record

In 2017, a high school student (referred to as B.L.) expressed her frustration with having not made the varsity cheerleading team through a private Snapchat post. The image showed her making an obscene gesture and was captioned, "f- school f- softball f- cheer f- everything."

A friend saved the snap and showed it to school authorities, resulting in B.L.'s expulsion from the junior varsity team. She was reinstated to the team a few months later as litigation ensued.

Ultimately, the case reached a federal appeals court, which ruled in the student's favor on the grounds that the school district's punishment violated the First Amendment; however obscene it may have been, the snap was between friends, off campus and outside of school grounds.

But this was not the end of the story. Mahanoy Area School District appealed the decision to the Supreme Court, which heard arguments in January.

The justices should affirm the lower court's decision in favor of free speech for high school and college students, especially off campus. Moreover, there is a need to clarify those protections in the modern social media landscape.

First of all, there is judicial precedent to take into consideration: Tinker v. Des Moines (1969). That case held that schools could not infringe on freedom of expression on school grounds unless the expression threatened to disrupt the academic environment. If schools have only limited power over expression on campus, what, then, gives them power to punish students for things they said off campus?

B.L.'s speech did not fit the criteria established by Tinker v. Des Moines, as there was no call to disrupt academic activity. Rather, she was momentarily expressing her frustration in a temporary post.

Moreover, B.L. expressed herself in private, which ought to be considered outside of the school district's jurisdiction. Not only did the district infringe on her First Amendment rights to freedom of speech and expression, but also on her Fourth Amendment right to privacy.

Now, a right to privacy is not explicitly written in the Constitution, but it is implied: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated ..." Because her speech was non-disruptive, it was not reasonable for B.L. to be punished for a statement she made in a private circle.

Although ruling in favor of rights to privacy and free speech and expression is the higher road for the Supreme Court to take, Mahanoy Area School District's concerns must be taken into account. Officials there worry that if they have no jurisdiction over what is said by students off campus, they will be unable to intervene in cases of cyberbullying and other such behavior outside of school.

Even taking that concern into account, the Supreme Court should rule in favor of students' First Amendment rights off campus and their privacy. The justices should also uphold Tinker v. Des Moines with an additional provision for social media: that it lies outside of school district authority, with the exceptions of the use of school-owned handles and of speech that disrupts academics or threatens or intimidates faculty, staff or other students. Only in such exceptional cases should schools have jurisdiction over speech.

First Amendment rights are crucial to a student's ability to communicate their thoughts and ideas with their peers and superiors. To quote the majority opinion in Tinker v. Des Moines, students do not "shed their constitutional rights to freedom of speech or expression at the schoolhouse gate," and certainly not outside of it.


Are quantum computers good at picking stocks? This project tried to find out – ZDNet

The researchers ran a model for portfolio optimization on Canadian company D-Wave's 2,000-qubit quantum annealing processor.

Consultancy firm KPMG, together with a team of researchers from the Technical University of Denmark (DTU) and a yet-to-be-named European bank, has been piloting the use of quantum computing to determine which stocks to buy and sell for maximum return, an age-old banking operation known as portfolio optimization.

The researchers ran a model for portfolio optimization on Canadian company D-Wave's 2,000-qubit quantum annealing processor, comparing the results to those obtained with classical means. They found that the quantum annealer performed better and faster than other methods, while being capable of resolving larger problems, although the study also indicated that D-Wave's technology still comes with some issues to do with ease of programming and scalability.

The smart distribution of portfolio assets is a problem that stands at the very heart of banking. Theorized by economist Harry Markowitz as early as 1952, it consists of allocating a fixed budget to a collection of financial assets in a way that will produce as much return as possible over time. In other words, it is an optimization problem: an investor should look to maximize gain and minimize risk for a given financial portfolio.


As the number of assets in the portfolio multiplies, the difficulty of the calculation increases exponentially, and the problem can quickly become intractable, even for the world's largest supercomputers. Quantum computing, on the other hand, offers the possibility of running multiple calculations at once thanks to superposition, a special quantum state adopted by quantum bits, or qubits.
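To see where the exponential blow-up comes from: even the simplest version of the problem, deciding which assets to include at all, allows two choices per asset, so the number of candidate portfolios doubles with each new asset. A minimal illustration (the function name is ours, purely for exposition):

```python
# Illustration of why exhaustive search quickly becomes intractable:
# each asset is either in or out of the portfolio, giving 2**n subsets.
def candidate_portfolios(n_assets: int) -> int:
    """Number of possible 0/1 selections over n_assets assets."""
    return 2 ** n_assets

for n in (10, 25, 65):
    print(f"{n} assets -> {candidate_portfolios(n):,} candidate portfolios")
```

At 25 assets there are already over 33 million subsets; at 65, the count exceeds the number of operations any classical machine can enumerate in reasonable time.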

Quantum systems, for now, cannot support enough qubits to have a real-world impact. But in principle, large-scale quantum computers could one day solve complex portfolio optimization problems in a matter of minutes, which is why the world's largest banks are already putting their research teams to work on developing quantum algorithms.

To translate Markowitz's classical model for the portfolio selection problem into a quantum algorithm, the DTU researchers reformulated the equation as a quantum-ready model called a quadratic unconstrained binary optimization (QUBO) problem, which they based on the usual criteria used for the operation, such as budget and expected return.
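The article does not give the exact formulation, but a QUBO for asset selection is typically assembled from a covariance (risk) term, an expected-return term on the diagonal, and a quadratic penalty enforcing the budget. The sketch below is a hypothetical toy version: `build_qubo`, its parameters and all the numbers are our illustrative assumptions, not the study's actual model.

```python
import itertools

import numpy as np

def build_qubo(mu, sigma, budget, risk_aversion=1.0, penalty=10.0):
    """Return Q such that the cost of a 0/1 selection x is x @ Q @ x.

    x[i] = 1 means "include asset i" in the portfolio.
    """
    n = len(mu)
    Q = risk_aversion * np.array(sigma, dtype=float)  # risk: covariance term
    Q -= np.diag(mu)                                  # reward expected return
    # Budget constraint (sum(x) - budget)**2 expanded into QUBO form;
    # the constant budget**2 is dropped since it shifts every cost equally.
    Q += penalty * (np.ones((n, n)) - 2 * budget * np.eye(n))
    return Q

mu = [0.10, 0.20, 0.15]              # toy expected returns
sigma = [[0.05, 0.01, 0.00],
         [0.01, 0.06, 0.00],
         [0.00, 0.00, 0.04]]         # toy covariance matrix
Q = build_qubo(mu, sigma, budget=2)

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # (0, 1, 1): the budget of two assets is respected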

When deciding which quantum hardware to pick to test their model, the team was faced with a number of options: IBM and Google are both working on a superconducting quantum computer, while Honeywell and IonQ are building trapped-ion devices; Xanadu is looking at photonic quantum technologies, and Microsoft is creating a topological quantum system.

D-Wave's quantum annealing processor is yet another approach to quantum computing. Unlike other systems, which are gate-based quantum computers, it is not possible to control the qubits in a quantum annealer; instead, D-Wave's technology consists of manipulating the environment surrounding the system, and letting the device find a "ground state". In this case, the ground state corresponds to the most optimal portfolio selection.

This approach, while limiting the scope of the problems that can be resolved by a quantum annealer, also enables D-Wave to work with many more qubits than other devices. The company's latest device counts 5,000 qubits, while IBM's quantum computer, for example, supports fewer than 100 qubits.

The researchers explained that the maturity of D-Wave's technology prompted them to pick quantum annealing to trial the algorithm; and equipped with the processor, they were able to embed and run the problem for up to 65 assets.

To benchmark the performance of the processor, they also solved the Markowitz equation by classical means, using a method called brute force. With the computational resources at their disposal, brute force could only be used for up to 25 assets, after which the problem became intractable for the method.
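Brute force here means exhaustively scoring every possible 0/1 selection and keeping the cheapest. A minimal sketch of that baseline (our own toy code; `Q` is any QUBO cost matrix):

```python
import itertools

import numpy as np

def brute_force(Q):
    """Exhaustively minimize x @ Q @ x over all 0/1 vectors x."""
    n = Q.shape[0]
    best_x, best_cost = None, float("inf")
    # The 2**n iterations of this loop are exactly what makes brute
    # force intractable beyond a few dozen assets.
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        cost = float(x @ Q @ x)
        if cost < best_cost:
            best_x, best_cost = bits, cost
    return best_x, best_cost

# Toy cost matrix: including asset 0 lowers the cost, asset 1 raises it.
Q = np.diag([-1.0, 2.0])
print(brute_force(Q))  # ((1, 0), -1.0)
```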

Comparing the two methods, the scientists found that the quality of the results provided by D-Wave's processor was equal to that delivered by brute force, proving that quantum annealing can reliably be used to solve the problem. In addition, as the number of assets grew, the quantum processor overtook brute force as the fastest method.

From 15 assets onwards, D-Wave's processor effectively started showing significant speed-up over brute force, as the problem got closer to becoming intractable for the classical computer.

To benchmark the performance of the quantum annealer for more than 25 assets, which is beyond the capability of brute force, the researchers compared the results obtained with D-Wave's processor to those obtained with a method called simulated annealing. There again, the study shows, the quantum processor provided high-quality results.
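Simulated annealing is a standard classical heuristic for QUBO-style problems: flip one bit at a time, always accept improvements, and accept worsening moves with a probability that shrinks as a "temperature" cools. A bare-bones sketch, with illustrative parameters not taken from the study:

```python
import math
import random

import numpy as np

def simulated_annealing(Q, steps=2000, t_start=2.0, t_end=0.01, seed=0):
    """Heuristically minimize x @ Q @ x over 0/1 vectors x."""
    rng = random.Random(seed)
    n = Q.shape[0]
    x = np.array([rng.randint(0, 1) for _ in range(n)])
    cost = float(x @ Q @ x)
    best_x, best_cost = x.copy(), cost
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        x[i] ^= 1                                   # propose flipping one bit
        new_cost = float(x @ Q @ x)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                         # accept the move
            if cost < best_cost:
                best_x, best_cost = x.copy(), cost  # track the best state seen
        else:
            x[i] ^= 1                               # reject: undo the flip
    return best_x.tolist(), best_cost

# Toy problem whose optimum is to hold assets 0 and 1 only (cost -2.0).
Q = np.diag([-1.0, -1.0, 3.0])
print(simulated_annealing(Q))
```

Unlike brute force, the runtime grows with the number of sweeps rather than with 2**n, which is why it remains usable past 25 assets and serves as the classical yardstick for the annealer.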

Although the experiment suggests that quantum annealing might show a computational advantage over classical devices, Ulrich Busk Hoff, a researcher at DTU who participated in the project, warns against hasty conclusions.

"For small-sized problems, the D-Wave quantum annealer is indeed competitive, as it offers a speed-up and solutions of high quality," he tells ZDNet. "That said, I believe that the study is premature for making any claims about an actual quantum advantage, and I would refrain from doing that. That would require a more rigorous comparison between D-Wave and classical methods and using the best possible classical computational resources, which was far beyond the scope of the project."

DTU's team also flagged some scalability issues, highlighting that as the portfolio size increased, there was a need to fine-tune the quantum model's parameters in order to prevent a drop in results quality. "As the portfolio size was increased, a degradation in the quality of the solutions found by quantum annealing was indeed observed," says Hoff. "But after optimization, the solutions were still competitive and were more often than not able to beat simulated annealing."


In addition, with the quantum industry still largely in its infancy, the researchers pointed to the technical difficulties that still come with using quantum technologies. Implementing quantum models, they explained, requires a new way of thinking; translating classical problems into quantum algorithms is not straightforward, and even D-Wave's fairly accessible software development kit cannot be described yet as "plug-and-play".

The Canadian company's quantum processor nevertheless shows a lot of promise for solving problems such as portfolio optimization. Although the researchers shared doubts that quantum annealing would have as much of an impact as large-scale gate-based quantum computers, they pledged to continue to explore the capabilities of the technology in other fields.

"I think it's fair to say that D-Wave is a competitive candidate for solving this type of problem, and it is certainly worth further investigation," says Hoff.

KPMG, DTU's researchers and large banks are far from alone in experimenting with D-Wave's technology for near-term applications of quantum computing. For example, researchers from pharmaceutical company GlaxoSmithKline (GSK) recently trialed the use of different quantum methods to sequence gene expression, and found that quantum annealing could already compete against classical computers to start addressing life-sized problems.


The key to making AI green is quantum computing – The Next Web

We've painted ourselves into another corner with artificial intelligence. We're finally starting to break through the usefulness barrier, but we're butting up against the limits of our ability to responsibly meet our machines' massive energy requirements.

At the current rate of growth, it appears we'll have to turn Earth into Coruscant if we want to keep spending unfathomable amounts of energy training systems such as GPT-3.

The problem: Simply put, AI takes too much time and energy to train. A layperson might imagine a bunch of code on a laptop screen when they think about AI development, but the truth is that many of the systems we use today were trained on massive GPU networks, supercomputers, or both. Were talking incredible amounts of power. And, worse, it takes a long time to train AI.

The reason AI is so good at the things it's good at, such as image recognition or natural language processing, is that it basically just does the same thing over and over again, making tiny changes each time, until it gets things right. But we're not talking about running a few simulations. It can take hundreds or even thousands of hours to train up a robust AI system.

One expert estimated that GPT-3, a natural language processing system created by OpenAI, would cost about $4.6 million to train. But that assumes one-shot training. And very, very few powerful AI systems are trained in one fell swoop. Realistically, the total expenses involved in getting GPT-3 to spit out impressively coherent gibberish are probably in the hundreds of millions.

GPT-3 is among the high-end abusers, but there are countless AI systems out there sucking up hugely disproportionate amounts of energy when compared to standard computation models.

The problem? If AI is the future, then under the current power-sucking paradigm, the future won't be green. And that may mean we simply won't have a future.

The solution: Quantum computing.

An international team of researchers, including scientists from the University of Vienna and MIT, recently published research demonstrating quantum speed-up in a hybrid artificial intelligence system.

In other words: they managed to exploit quantum mechanics in order to allow AI to find more than one solution at the same time. This, of course, speeds up the training process.

Per the team's paper:

The crucial question for practical applications is how fast agents learn. Although various studies have made use of quantum mechanics to speed up the agents decision-making process, a reduction in learning time has not yet been demonstrated.

Here we present a reinforcement learning experiment in which the learning process of an agent is sped up by using a quantum communication channel with the environment. We further show that combining this scenario with classical communication enables the evaluation of this improvement and allows optimal control of the learning progress.

How?

This is the cool part. They ran 10,000 models through 165 experiments to determine how they functioned using classical AI and how they functioned when augmented with special quantum chips.

And by special, that is to say: you know how classical CPUs process information by manipulating electricity? The quantum chips the team used were nanophotonic, meaning they use light instead of electricity.

The gist of the operation is that in circumstances where classical AI bogs down solving very difficult problems (think: supercomputer problems), they found the hybrid quantum system outperformed standard models.

Interestingly, when presented with less difficult challenges, the researchers didn't observe any performance boost. Seems like you need to get it into fifth gear before you kick in the quantum turbocharger.

There's still a lot to be done before we can roll out the old "mission accomplished" banner. The team's work wasn't the solution we're eventually aiming for, but more of a small-scale model of how it could work once we figure out how to apply their techniques to larger, real problems.

You can read the whole paper here on Nature.

H/t: Shelly Fan, Singularity Hub

Published March 17, 2021 19:41 UTC


Quantum computing is finally having something of a moment – World Finance

Author: David Orrell, Author and Economist

March 16, 2021

In 2019, Google announced that they had achieved quantum supremacy by showing they could run a particular task much faster on their quantum device than on any classical computer. Research teams around the world are competing to find the first real-world applications, and finance is at the very top of this list.

However, quantum computing may do more than change the way that quantitative analysts run their algorithms. It may also profoundly alter our perception of the financial system, and the economy in general. The reason for this is that classical and quantum computers handle probability in a different way.

The quantum coin
In classical probability, a statement can be either true or false, but not both at the same time. In mathematics-speak, the rule for determining the size of some quantity is called the norm. In classical probability, the norm, denoted the 1-norm, is just the magnitude. If the probability is 0.5, then that is the size.

The next-simplest norm, known as the 2-norm, works for a pair of numbers, and is the square root of the sum of squares. The 2-norm therefore corresponds to the distance between two points on a 2-dimensional plane, instead of a 1-dimensional line, hence the name. Since mathematicians love to extend a theory, a natural question to ask is what rules for probability would look like if they were based on this 2-norm.


For one thing, we could denote the state of something like a coin toss by a 2-D diagonal ray of length 1. The probability of heads is given by the square of the horizontal extent, while the probability of tails is given by the square of the vertical extent. By the Pythagorean theorem, the sum of these two numbers equals 1, as expected for a probability. If the coin is perfectly balanced, then the line should be at 45 degrees, so the chances of getting a heads or tails are identical. When we toss the coin and observe the outcome, the ambiguous state collapses to either heads or tails.

Because the norm of a quantum probability depends on the square, one could also imagine cases where the probabilities were negative. In classical probability, negative probabilities don't make sense: if a forecaster announced a negative 30 percent chance of rain tomorrow, we would think they were crazy. However, in a 2-norm, there is nothing to prevent negative probabilities occurring. It is only in the final step, when we take the magnitude into account, that negative probabilities are forced to become positive.

If we're going to allow negative numbers, then for mathematical consistency we should also permit complex numbers, which involve the square root of negative one. Now it's possible we'll end up with a complex number for a probability; however, the 2-norm of a complex number is a positive number (or zero).

To summarise, classical probability is the simplest kind of probability, which is based on the 1-norm and involves positive numbers. The next-simplest kind of probability uses the 2-norm, and includes complex numbers. This kind of probability is called quantum probability.
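The 2-norm rule can be checked in a few lines of Python. This toy illustration (not tied to any quantum library) shows that negative and complex amplitudes both collapse to ordinary non-negative probabilities:

```python
import math

def probabilities(amplitudes):
    """Map a normalized amplitude vector to outcome probabilities.

    The 2-norm rule: each probability is the squared magnitude of the
    corresponding amplitude, so the result is always non-negative.
    """
    return [abs(a) ** 2 for a in amplitudes]

fair_coin = [1 / math.sqrt(2), -1 / math.sqrt(2)]     # a negative amplitude
complex_coin = [1 / math.sqrt(2), 1j / math.sqrt(2)]  # a complex amplitude

print(probabilities(fair_coin))     # both outcomes equally likely
print(probabilities(complex_coin))  # same probabilities, despite the complex entry
```

Either way, the probabilities sum to 1, exactly as the Pythagorean-theorem picture of the quantum coin requires.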

Quantum logic
In a classical computer, a bit can take the value of 0 or 1. In a quantum computer, the state is represented by a qubit, which in mathematical terms describes a ray of length 1. Only when the qubit is measured does it give a 0 or 1. But prior to measurement, a quantum computer can work in the superposed state, which is what makes them so powerful.

So what does this have to do with finance? Well, it turns out that quantum algorithms behave in a very different way from their classical counterparts. For example, many of the algorithms used by quantitative analysts are based on the concept of a random walk. This assumes that the price of an asset such as a stock varies in a random way, taking a random step up or down at each time step. It turns out that the magnitude of the expected change increases with the square-root of time.

Quantum computing has its own version of the random walk, which is known as the quantum walk. One difference is the expected magnitude of change, which grows much faster (linearly with time). This feature matches the way that most people think about financial markets. After all, if we think a stock will go up by eight percent in a year, then we will probably extend that into the future as well, so the next year it will grow by another eight percent. We don't think in square roots.
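The square-root scaling of the classical walk is easy to verify numerically. The sketch below simulates simple +1/-1 walks (toy parameters, no quantum simulation): quadrupling the number of time steps roughly doubles, rather than quadruples, the typical displacement.

```python
import random

def mean_abs_displacement(t_steps, n_walks=20000, seed=1):
    """Average |final position| of n_walks random +1/-1 walks of t_steps steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        position = sum(rng.choice((-1, 1)) for _ in range(t_steps))
        total += abs(position)
    return total / n_walks

# Four times as many steps should roughly double the typical move (sqrt(4) = 2).
ratio = mean_abs_displacement(100) / mean_abs_displacement(25)
print(round(ratio, 2))  # close to 2, not 4
```

A quantum walk, by contrast, would make this ratio close to 4, since its spread grows linearly with time.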

This is just one way in which quantum models seem a better fit to human thought processes than classical ones. The field of quantum cognition shows that many of what behavioural economists call paradoxes of human decision-making actually make perfect sense when we switch to quantum probability. Once quantum computers become established in finance, expect quantum algorithms to get more attention, not for their ability to improve processing times, but because they are a better match for human behaviour.
