Cuyahoga County Probation Officer Hits Union with Federal Lawsuit for Years of Unconstitutional Dues Seizures – National Right to Work Foundation

Union officials took full union dues from a nonmember officer without consent, then ignored requests to return the illegally seized money

Cleveland, OH (August 25, 2021) Cuyahoga County probation officer Kimberlee Warren is suing the Fraternal Order of Police (FOP) union in her workplace, charging union officials with breaching her First Amendment right as a public employee to refuse to support union activities. She is receiving free legal representation from National Right to Work Legal Defense Foundation staff attorneys, in partnership with attorneys from the Ohio-based Buckeye Institute.

Foundation staff attorneys contend that FOP union officials ignored her constitutional rights recognized in the landmark 2018 Janus v. AFSCME U.S. Supreme Court decision, which was argued and won by Right to Work Foundation staff attorneys.

In Janus, the Justices declared it a First Amendment violation to force any public sector employee to pay union dues or fees as a condition of keeping his or her job. The Court also ruled that public employers and unions cannot take union dues or fees from a public sector employee unless they obtain that employee's affirmative consent.

The federal lawsuit says that Warren was not a member of the FOP union before the Janus decision in June 2018, but FOP union bosses collected union dues from her wages without her consent. According to the complaint, this continued until around December 2020, when Warren notified union officials that they were violating her First Amendment rights by taking the money and demanded that the union stop the coerced deductions and return all money that they had taken from her paycheck since the Janus decision.

When the deductions ended, FOP chiefs refused to give back the money that they had already seized from Warren in violation of her First Amendment rights. They claimed the deductions had appeared on her check stub, and thus any responsibility to cease the deductions fell on her, even though, to her knowledge, they had never obtained permission to opt her into membership or to take money from her paycheck in the first place.

According to the lawsuit, Warren also asked FOP bosses to provide any dues deduction authorization document she might have signed. FOP officials rebuffed this request as well.

The High Court ruled in Janus that, because all activities public sector unions undertake involve lobbying the government and thus are political speech, forcing a public employee to pay any union dues or fees as a condition of keeping his or her job is forced political speech the First Amendment forbids.

Union bosses were permitted by state law before the Janus ruling to seize from nonmember workers' paychecks only the part of dues they claimed went toward representational activities. FOP union officials took this amount from Warren prior to Janus. However, they furtively designated her as a member following the decision and began taking full dues, deducting even more money from her wages than they did before Janus, despite the complete lack of any consent.

Warren is now suing the FOP union in the U.S. District Court for the Northern District of Ohio. Her lawsuit seeks the return of all dues that FOP union officials garnished from her paycheck since the Janus decision was handed down. It also seeks punitive damages because FOP showed reckless, callous indifference toward her First Amendment rights by snubbing her refund requests.

Warren's lawsuit comes as other Foundation-backed lawsuits for employees defending their First Amendment Janus rights seek writs of certiorari from the Supreme Court. These include cases brought for Chicago and New Jersey public educators that challenge window periods which severely limit when they and their fellow educators can exercise their First Amendment right to stop union dues deductions, sometimes to periods as short as ten days per year. In a California federal court, Foundation staff attorneys are also aiding a University of California Irvine lab assistant in fighting an anti-Janus state law that gives union bosses full control over whether employers can stop sending an employee's money to the union after that employee exercises his or her Janus rights.

"All over the country, union officials are stopping at nothing to ensure they can continue ignoring workers' First Amendment Janus rights and continue siphoning money from the paychecks of dissenting employees," commented National Right to Work Foundation President Mark Mix. "After Janus was handed down, FOP union officials in Warren's workplace could have come to her to attempt to get her to support the union voluntarily, but tellingly they instead began surreptitiously siphoning full dues out of her paycheck without her consent, in direct contravention of the Supreme Court."

"Despite her repeated requests, FOP bosses have continued to trample Warren's Janus rights, and Foundation staff attorneys are fighting to stop this gross injustice against her and punish FOP bosses for their brazen behavior," Mix added.

The National Right to Work Legal Defense Foundation is a nonprofit, charitable organization providing free legal aid to employees whose human or civil rights have been violated by compulsory unionism abuses. The Foundation, which can be contacted toll-free at 1-800-336-3600, assists thousands of employees in around 250 cases nationwide per year.


The Conundrum of the Separation of Church and State – Divided We Fall

Religious Freedom: A Standard or an Enigma?

By Teresa Smallwood, Postdoctoral Fellow & Associate Director, Public Theology and Racial Justice Collaborative

When the Danbury Baptist Association wrote President Thomas Jefferson on October 7, 1801 regarding their desire for the separation of church and state, they were advancing a position in favor of private, individualized faith expressed without governmental intervention. In 1801 the stakes could not have been higher because the establishment clause was only a decade old and there was a flood of enactments across the colonies to preserve their status as independent sovereigns. But how that standard would be implemented and enforced was a worrisome contention for many people of faith. Jefferson's reply on January 1, 1802 reverently acknowledged the separation and vowed that there would be "a wall of separation between church and state," a phrase he borrowed from Roger Williams, a London minister who greatly influenced the colonies in favor of religious liberty.

In our present context, culturally, socially, economically, and legally, I posit that the wall of separation is crumbling. Despite groups like Americans United for the Separation of Church and State, founded to preserve the constitutional principle of church-state separation as the only way to ensure religious freedom for all Americans, it is impossible to see a true separation between church and state in the sense of the letters exchanged over two centuries ago. In fact, the notion of religious freedom or religious liberty is hard to discern.

In the same breath, the First Amendment to the US Constitution admonishes that Congress shall make no law respecting an establishment of religion, and simultaneously it declares that everyone should have the right to freedom of religion. In effect, this is a conundrum when one considers the United States Supreme Court's decision in Masterpiece Cakeshop, Ltd. v. Colorado Civil Rights Commission. At issue was the shop owner's right to reject customers in light of his religious beliefs. He claimed his deeply held religious beliefs would not abide his making a wedding cake for a same-sex couple. The Supreme Court sided with the owner. Despite what I could say about the integrity of the decision, there is no way to avoid concluding that the US Supreme Court has been slowly eroding religious freedom to the point where the wall of separation is like Humpty Dumpty having a great fall.

For argument's sake, perhaps the fair thing to do is to advance the notion that marriage is held sacred by non-church-going people as well. Same-sex couples have religious beliefs. In fact, I would venture to say that people in covenant relationships who go to the lengths to repeat vows and celebrate with traditionally tiered wedding cakes do so in support of deeply held religious beliefs, whether they acknowledge a God concept or not. The Supreme Court never mentioned the fact that the analysis goes both ways.

Moreover, if that is the case, siding with one litigant over the other in terms of religious beliefs may look like establishment. It, however, points to a wider problem, one that we as Americans, particularly people of color, must seriously consider: What happens when a case reaches the United States Supreme Court to decide whether the January 6, 2021 insurrection was employed and executed based upon deeply held religious beliefs?

Let's face it: Some of the mobsters carried Bibles in lockstep with other mobsters carrying nooses. Are we in danger of a backdoor approval of the return to chattel slavery based upon deeply held religious beliefs? The Apostle Paul did say "slaves, obey your masters," did he not? The stacking of the Supreme Court with ultra-conservative jurists makes the question linger in the air.

Voter suppression, police brutality, mass incarceration, and economic disparities all point to a corrosion of basic democratic values, not the least of which is religious freedom. Freedom from tyranny and freedom to exercise one's right to deeply held religious beliefs should not create a conflict so convoluted that the judiciary has to respect the establishment of someone's religious belief as a means to an end while concomitantly abridging another's right to the same freedom. Religious freedom should intimate a hands-off approach that the Supreme Court avows at all costs. That was the pledge Jefferson made. A wall of separation is a shield from contact, either literal or perceived. However, for decades the trend has been anything but hands-off.

Burwell v. Hobby Lobby, for example, is one case where the wall of separation is nowhere to be found. In a 5-4 decision written by Justice Samuel A. Alito Jr., the US Supreme Court allowed a for-profit company to deny its employees health coverage for contraception based on the company owners' religious beliefs. Absent the religious objection, these employees would have been entitled to these health benefits. The Religious Freedom Restoration Act was the operative legislation in this court opinion. The 1993 Act, as applied to corporations, creates a cyborg-ish effect. There is a danger that the inverse nature of religious freedom jurisprudence turns on itself in such a way that the freedom to practice one's religion trumps the scrutiny of every other discriminatory eventuality. The totalizing impact of this could reverse the gains Americans have made in a democracy that once valued religious freedom as much as it once valued the wall of separation. The enigmatic reality is that walls are overrated.

By Jeff Johnston, Culture and Policy Analyst, The Daily Citizen

Chase Windebank was a senior at Pine Creek High School in Colorado Springs, Colorado. Beginning in his freshman year, he led a small group of students who wanted to pray for their school and the needs of fellow classmates during non-instructional time. One day, a school official called him in and told him the group could no longer meet because of the separation of church and state.

A year later, the school dropped its ban on student religious discussion and expression during free time, after Alliance Defending Freedom (ADF), a legal aid group advocating for First Amendment rights, filed a lawsuit against the district.

Think stories like this are unusual? Across the nation, from the schoolhouse to the military to the medical field, religious freedom is under fire. Houses of worship and ministries have felt the heat from those who work to eliminate religious expression from the public arena, often under the misguided banner of separation of church and state.

The largest legal organization in the U.S. solely devoted to defending religious liberty is First Liberty Institute. In its annual report, Undeniable: An Inside Look at the Cases, Controversies and Unrelenting Attacks on Religious Liberty in America, the organization lists more than 1,400 cases, mostly from the past 20 years, demonstrating the deep antipathy from many toward religion and people of faith.

Some of the cases are well known:

Others have received less publicity. A synagogue in Woodcliff Lake, New Jersey filed suit after the city took land from the congregation and blocked its efforts to relocate for ten years. The Equal Employment Opportunity Commission (EEOC) sued UPS over its policy banning drivers from having beards, on behalf of Rastafarians, Muslims, and Sikhs whose facial hair is part of their faith and culture. A New York nurse was told she must participate in a late-term abortion, which was against her religious beliefs, and was threatened with termination and loss of her nursing license if she refused to do so.

There's a reason that religious liberty is called our "first freedom," and there's a reason people and religious legal aid groups continue to fight to preserve and protect it. Not only do the two clauses protecting religion from government incursion make up the first freedom listed in the Bill of Rights, but freedom of religion is vital because it protects our deepest thoughts and beliefs as well as our expression of them in our daily lives.

There's a huge misunderstanding that somehow the First Amendment places a "wall of separation" between church and state, an unfortunate phrase used by Thomas Jefferson in a letter to the Danbury Baptist Association in Connecticut. To deliberately mix metaphors, the wall of separation has been used as a sledgehammer, especially in recent years, against churches and people of faith. While some complain that the so-called wall of separation is crumbling, the truth is it has grown thicker and higher over the decades, threatening to crush our first freedom.

The phrase is not found in the Constitution, nor is it in the Bill of Rights. If the Founders had wanted to, they could easily have included a wall of separation. But as the University of Chicago's Professor of Law Philip Hamburger argues in Separation of Church and State, they strove to create something new: real religious liberty, without state overreach and control. They said that Congress could not establish a national church, nor could it prohibit the free exercise of religion.

And that free exercise of religion isn't just about private worship or individualized faith; it includes the freedom of individuals and different faiths to exercise belief and conviction in the public arena through their speech and actions.

While the phrase "separation of church and state" has become part of our common language, Hamburger explains how this erroneous idea grew and developed, replacing the First Amendment protection of religious liberty. As such, it's seen by many to be a freedom from religion in the public square.

Hamburger writes, "Yet the idea of separation of church and state was very different from the religious liberty desired by the religious dissenters whose demands shaped the First Amendment." He adds that the simplistic metaphor of separation is opposed to the union of church and state, but that union and separation are "over-generalizations between which lie much middle ground."

As opponents of religious freedom have tried to use the so-called wall to penalize bakers, florists, coaches, nurses and others, courts have, thankfully, begun pushing back against the complete removal of religion from public life. For example, the Supreme Court, in Masterpiece Cakeshop v. Colorado Civil Rights Commission, ruled in favor of Jack Phillips, saying the state showed animosity and discriminated against his convictions.

More recently, the Supreme Court has struck down onerous state government COVID decrees that shut down worship, treating churches less favorably than businesses, in cases such as Roman Catholic Diocese of Brooklyn v. Cuomo and Tandon v. Newsom. And in June 2021 the Court ruled 9-0, in Fulton v. Philadelphia, that the city had violated the First Amendment free exercise rights of Catholic Social Services, allowing them to continue placing children in loving homes with a mother and father.

People of faith have the right to share and live out our beliefs in the public arena. Even as assaults on religious liberty have accelerated, let's hope that courts continue to protect our cherished first freedom.

This article is part of Divided We Fall's Constitutional Questions series, covering a range of political topics fundamental to the U.S. Constitution and democratic institutions. Through this series, we ask constitutional scholars, journalists, elected officials, and activists to discuss how these ideals are and are not implemented today. If you want to read more pieces like this, click here.

Teresa Smallwood

Rev. Dr. Teresa L. Smallwood is a Postdoctoral Fellow and Associate Director of the Public Theology and Racial Justice Collaborative at Vanderbilt Divinity School. She is licensed and ordained to public ministry in the Baptist tradition and is presently an active member at New Covenant Christian Church in Nashville, TN, where she serves as Social Justice Minister. She holds a BA from the University of North Carolina at Chapel Hill, a JD from North Carolina Central University School of Law, a Master of Divinity from Howard University, and a PhD from Chicago Theological Seminary.

Jeff Johnston

Jeff Johnston is Focus on the Family's culture and policy analyst for The Daily Citizen. He researches, writes, and speaks about education, marriage, LGBTQ issues, and healthy sexuality. After struggling for years to reconcile his faith with his same-sex attractions and sexual addiction, Johnston now shares his journey of healing and change through God's transforming power. Johnston has been interviewed by top media outlets including CBS Sunday Morning, The New York Times, U.S. News and World Report, Rolling Stone, and more. He graduated from San Diego State University and lives in Colorado Springs with his wife and three sons.


Daphne Keller, "Amplification and Its Discontents: Why Regulating the Reach of Online Content Is Hard" – Reason

Still more from the free speech and social media platforms symposium in the first issue of our Journal of Free Speech Law; you can read the whole article (by Daphne Keller, formerly at Google and now at Stanford) here, but here's the abstract [UPDATE: link fixed]:

Discussions about platform regulation increasingly focus on the "reach" or "amplification" that platforms provide for illegal or harmful content posted by users. Some have proposed holding platforms liable for amplified content, even if the platforms are immunized for simply hosting or transmitting the same content. This article discusses the serious challenges of that regulatory approach. It examines legal models that would (1) increase platform liability for amplifying currently illegal content, (2) increase platform liability for amplifying harmful but currently legal content, or (3) create content-neutral restraints on amplification. It suggests, using both U.S. First Amendment precedent and comparison to recent EU legal developments, that the first two approaches would raise serious concerns. It identifies potentially more viable ways forward, however, in content-neutral approaches grounded in privacy or competition law.


A general reinforcement learning algorithm that masters …

One program to rule them all

Computers can beat humans at increasingly complex games, including chess and Go. However, these programs are typically constructed for a particular game, exploiting its properties, such as the symmetries of the board on which it is played. Silver et al. developed a program called AlphaZero, which taught itself to play Go, chess, and shogi (a Japanese version of chess) (see the Editorial, and the Perspective by Campbell). AlphaZero managed to beat state-of-the-art programs specializing in these three games. The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system.

Science, this issue p. 1140; see also pp. 1087 and 1118

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

The study of computer chess is as old as computer science itself. Charles Babbage, Alan Turing, Claude Shannon, and John von Neumann devised hardware, algorithms, and theory to analyze and play the game of chess. Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at a superhuman level (1, 2). However, these systems are highly tuned to their domain and cannot be generalized to other games without substantial human effort, whereas general game-playing systems (3, 4) remain comparatively weak.

A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (5, 6). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go by representing Go knowledge with the use of deep convolutional neural networks (7, 8), trained solely by reinforcement learning from games of self-play (9). In this paper, we introduce AlphaZero, a more generic version of the AlphaGo Zero algorithm that accommodates, without special casing, a broader class of game rules. We apply AlphaZero to the games of chess and shogi, as well as Go, by using the same algorithm and network architecture for all three games. Our results demonstrate that a general-purpose reinforcement learning algorithm can learn, tabula rasa (without domain-specific human knowledge or data, as evidenced by the same algorithm succeeding in multiple domains), superhuman performance across multiple challenging games.

A landmark for artificial intelligence was achieved in 1997 when Deep Blue defeated the human world chess champion (1). Computer chess programs continued to progress steadily beyond human level in the following two decades. These programs evaluate positions by using handcrafted features and carefully tuned weights, constructed by strong human players and programmers, combined with a high-performance alpha-beta search that expands a vast search tree by using a large number of clever heuristics and domain-specific adaptations. In (10) we describe these augmentations, focusing on the 2016 Top Chess Engine Championship (TCEC) season 9 world champion Stockfish (11); other strong chess programs, including Deep Blue, use very similar architectures (1, 12).

In terms of game tree complexity, shogi is a substantially harder game than chess (13, 14): It is played on a larger board with a wider variety of pieces; any captured opponent piece switches sides and may subsequently be dropped anywhere on the board. The strongest shogi programs, such as the 2017 Computer Shogi Association (CSA) world champion Elmo, have only recently defeated human champions (15). These programs use an algorithm similar to those used by computer chess programs, again based on a highly optimized alpha-beta search engine with many domain-specific adaptations.

AlphaZero replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm.

Instead of a handcrafted evaluation function and move-ordering heuristics, AlphaZero uses a deep neural network (p, v) = f_θ(s) with parameters θ. This neural network f_θ(s) takes the board position s as an input and outputs a vector of move probabilities p with components p_a = Pr(a|s) for each action a, and a scalar value v estimating the expected outcome z of the game from position s, v ≈ E[z|s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search in future games.
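As a rough sketch of this two-headed interface (a toy fully connected model, not DeepMind's actual residual network; the weights and the 8-feature board encoding are illustrative stand-ins):

```python
import numpy as np

def f_theta(s, theta):
    """Toy two-headed network: board features -> (move probabilities p, value v).

    s: 1-D feature vector encoding the position (a placeholder encoding).
    theta: dict of weight matrices standing in for the real network parameters.
    """
    h = np.tanh(theta["W_h"] @ s)                # shared trunk
    logits = theta["W_p"] @ h                    # policy head
    p = np.exp(logits - logits.max())
    p /= p.sum()                                 # softmax over actions
    v = np.tanh(theta["W_v"] @ h)[0]             # value head, squashed into [-1, 1]
    return p, v

rng = np.random.default_rng(0)
theta = {"W_h": rng.normal(size=(16, 8)),
         "W_p": rng.normal(size=(4, 16)),        # 4 legal actions in this toy example
         "W_v": rng.normal(size=(1, 16))}
p, v = f_theta(rng.normal(size=8), theta)
```

The essential point is only the shape of the contract: one forward pass yields both a probability distribution over actions and a scalar value estimate in [−1, 1].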

Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root state s_root until a leaf state is reached. Each simulation proceeds by selecting in each state s a move a with low visit count (not previously frequently explored), high move probability, and high value (averaged over the leaf states of simulations that selected a from s) according to the current neural network f_θ. The search returns a vector π representing a probability distribution over moves, π_a = Pr(a|s_root).
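The selection rule described above (favoring high value, high prior, and low visit count) can be sketched with the PUCT-style formula used in this family of algorithms; the constant `c_puct` and the node statistics below are illustrative, not the paper's tuned values:

```python
import math

def select_move(stats, c_puct=1.5):
    """Pick the action maximizing Q(s,a) + U(s,a), where
    U = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a)).

    stats: {action: (N visit count, W total simulation value, P prior prob)}
    """
    total_n = sum(n for n, _, _ in stats.values())
    best, best_score = None, -float("inf")
    for a, (n, w, p) in stats.items():
        q = w / n if n else 0.0                        # mean value through a
        u = c_puct * p * math.sqrt(total_n) / (1 + n)  # exploration bonus
        if q + u > best_score:
            best, best_score = a, q + u
    return best

# A rarely visited move with a high prior can beat a well-explored one:
stats = {"e4": (10, 6.0, 0.3), "d4": (1, 0.2, 0.6)}
chosen = select_move(stats)
```

Here "e4" has the higher mean value (0.6 vs. 0.2), but "d4" wins the selection because its low visit count and high prior give it a much larger exploration bonus.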

The parameters θ of the deep neural network in AlphaZero are trained by reinforcement learning from self-play games, starting from randomly initialized parameters θ. Each game is played by running an MCTS from the current position s_root = s_t at turn t and then selecting a move, a_t ~ π_t, either proportionally (for exploration) or greedily (for exploitation) with respect to the visit counts at the root state. At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win. The neural network parameters θ are updated to minimize the error between the predicted outcome v_t and the game outcome z and to maximize the similarity of the policy vector p_t to the search probabilities π_t. Specifically, the parameters θ are adjusted by gradient descent on a loss function l that sums over mean-squared error and cross-entropy losses,

l = (z − v)² − π⊤ log p + c‖θ‖²,   (1)

where c is a parameter controlling the level of L2 weight regularization. The updated parameters are used in subsequent games of self-play.

The AlphaZero algorithm described in this paper [see (10) for the pseudocode] differs from the original AlphaGo Zero algorithm in several respects.

AlphaGo Zero estimated and optimized the probability of winning, exploiting the fact that Go games have a binary win or loss outcome. However, both chess and shogi may end in drawn outcomes; it is believed that the optimal solution to chess is a draw (16-18). AlphaZero instead estimates and optimizes the expected outcome.

The rules of Go are invariant to rotation and reflection. This fact was exploited in AlphaGo and AlphaGo Zero in two ways. First, training data were augmented by generating eight symmetries for each position. Second, during MCTS, board positions were transformed by using a randomly selected rotation or reflection before being evaluated by the neural network, so that the Monte Carlo evaluation was averaged over different biases. To accommodate a broader class of games, AlphaZero does not assume symmetry; the rules of chess and shogi are asymmetric (e.g., pawns only move forward, and castling is different on kingside and queenside). AlphaZero does not augment the training data and does not transform the board position during MCTS.
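The eight Go symmetries mentioned above (four rotations, each with and without reflection) are straightforward to generate; a sketch with numpy:

```python
import numpy as np

def eight_symmetries(board):
    """Return the 8 dihedral transforms of a square board:
    rotations by 0/90/180/270 degrees, each optionally reflected."""
    out = []
    for k in range(4):
        r = np.rot90(board, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

board = np.arange(9).reshape(3, 3)   # an asymmetric toy position
syms = eight_symmetries(board)
```

For a position with no symmetry of its own, all eight transforms are distinct, which is why this trick multiplies Go training data eightfold; for chess and shogi, whose rules are not invariant under these transforms, the trick is unavailable, and AlphaZero forgoes it entirely.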

In AlphaGo Zero, self-play games were generated by the best player from all previous iterations. After each iteration of training, the performance of the new player was measured against the best player; if the new player won by a margin of 55%, then it replaced the best player. By contrast, AlphaZero simply maintains a single neural network that is updated continually rather than waiting for an iteration to complete. Self-play games are always generated by using the latest parameters for this neural network.

As in AlphaGo Zero, the board state is encoded by spatial planes based only on the basic rules for each game. The actions are encoded by either spatial planes or a flat vector, again based only on the basic rules for each game (10).

AlphaGo Zero used a convolutional neural network architecture that is particularly well-suited to Go: The rules of the game are translationally invariant (matching the weight-sharing structure of convolutional networks) and are defined in terms of liberties corresponding to the adjacencies between points on the board (matching the local structure of convolutional networks). By contrast, the rules of chess and shogi are position dependent (e.g., pawns may move two steps forward from the second rank and promote on the eighth rank) and include long-range interactions (e.g., the queen may traverse the board in one move). Despite these differences, AlphaZero uses the same convolutional network architecture as AlphaGo Zero for chess, shogi, and Go.

The hyperparameters of AlphaGo Zero were tuned by Bayesian optimization. In AlphaZero, we reuse the same hyperparameters, algorithm settings, and network architecture for all games without game-specific tuning. The only exceptions are the exploration noise and the learning rate schedule [see (10) for further details].

We trained separate instances of AlphaZero for chess, shogi, and Go. Training proceeded for 700,000 steps (in mini-batches of 4096 training positions) starting from randomly initialized parameters. During training only, 5000 first-generation tensor processing units (TPUs) (19) were used to generate self-play games, and 16 second-generation TPUs were used to train the neural networks. Training lasted for approximately 9 hours in chess, 12 hours in shogi, and 13 days in Go (see table S3) (20). Further details of the training procedure are provided in (10).

Figure 1 shows the performance of AlphaZero during self-play reinforcement learning, as a function of training steps, on an Elo (21) scale (22). In chess, AlphaZero first outperformed Stockfish after just 4 hours (300,000 steps); in shogi, AlphaZero first outperformed Elmo after 2 hours (110,000 steps); and in Go, AlphaZero first outperformed AlphaGo Lee (9) after 30 hours (74,000 steps). The training algorithm achieved similar performance in all independent runs (see fig. S3), suggesting that the high performance of AlphaZero's training algorithm is repeatable.
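For reference, the Elo scale used in these comparisons maps a rating gap to an expected game score via E = 1 / (1 + 10^((R_B − R_A)/400)); a quick sketch:

```python
def elo_expected_score(r_a, r_b):
    """Expected score for player A vs. player B under the Elo model
    (win = 1, draw = 0.5, loss = 0)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# Equal ratings give an expected score of 0.5;
# a 400-point edge gives an expected score of 10/11, about 0.91.
equal = elo_expected_score(2800, 2800)
edge = elo_expected_score(2800, 2400)
```

This is why even a modest Elo gap between two engines translates into a lopsided match score over many games.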

Elo ratings were computed from games between different players where each player was given 1 s per move. (A) Performance of AlphaZero in chess compared with the 2016 TCEC world champion program Stockfish. (B) Performance of AlphaZero in shogi compared with the 2017 CSA world champion program Elmo. (C) Performance of AlphaZero in Go compared with AlphaGo Lee and AlphaGo Zero (20 blocks over 3 days).

We evaluated the fully trained instances of AlphaZero against Stockfish, Elmo, and the previous version of AlphaGo Zero in chess, shogi, and Go, respectively. Each program was run on the hardware for which it was designed (23): Stockfish and Elmo used 44 central processing unit (CPU) cores (as in the TCEC world championship), whereas AlphaZero and AlphaGo Zero used a single machine with four first-generation TPUs and 44 CPU cores (24). The chess match was played against the 2016 TCEC (season 9) world champion Stockfish [see (10) for details]. The shogi match was played against the 2017 CSA world champion version of Elmo (10). The Go match was played against the previously published version of AlphaGo Zero [also trained for 700,000 steps (25)]. All matches were played by using time controls of 3 hours per game, plus an additional 15 s for each move.

In Go, AlphaZero defeated AlphaGo Zero (9), winning 61% of games. This demonstrates that a general approach can recover the performance of an algorithm that exploited board symmetries to generate eight times as much data (see fig. S1).
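The eight board symmetries mentioned above are the dihedral transformations of the square Go board: four rotations, each with an optional reflection. A small NumPy illustration of how one position yields eight training positions (illustrative only; AlphaZero itself does not use this augmentation):

```python
import numpy as np

# The 8 dihedral symmetries of a square board: rotations by 0/90/180/270
# degrees, each optionally reflected. AlphaGo Zero used these to generate
# eight training positions from every self-play position.

def board_symmetries(board: np.ndarray) -> list:
    """Return the 8 dihedral transformations of a square board."""
    syms = []
    for k in range(4):
        rotated = np.rot90(board, k)       # rotation by k * 90 degrees
        syms.append(rotated)
        syms.append(np.fliplr(rotated))    # rotation followed by reflection
    return syms

board = np.arange(9).reshape(3, 3)         # a tiny asymmetric "board"
print(len(board_symmetries(board)))        # 8
```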

In chess, AlphaZero defeated Stockfish, winning 155 games and losing 6 games out of 1000 (Fig. 2). To verify the robustness of AlphaZero, we played additional matches that started from common human openings (Fig. 3). AlphaZero defeated Stockfish in each opening, suggesting that AlphaZero has mastered a wide spectrum of chess play. The frequency plots in Fig. 3 and the time line in fig. S2 show that common human openings were independently discovered and played frequently by AlphaZero during self-play training. We also played a match that started from the set of opening positions used in the 2016 TCEC world championship; AlphaZero won convincingly in this match, too (26) (fig. S4). We played additional matches against the most recent development version of Stockfish (27) and a variant of Stockfish that uses a strong opening book (28). AlphaZero won all matches by a large margin (Fig. 2).

Fig. 2. (A) Tournament evaluation of AlphaZero in chess, shogi, and Go in matches against, respectively, Stockfish, Elmo, and the previously published version of AlphaGo Zero (AG0) that was trained for 3 days. In the top bar, AlphaZero plays white; in the bottom bar, AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (W; green), draw (D; gray), or loss (L; red). (B) Scalability of AlphaZero with thinking time compared with Stockfish and Elmo. Stockfish and Elmo always receive full time (3 hours per game plus 15 s per move); time for AlphaZero is scaled down as indicated. (C) Extra evaluations of AlphaZero in chess against the most recent version of Stockfish at the time of writing (27) and against Stockfish with a strong opening book (28). Extra evaluations of AlphaZero in shogi were carried out against another strong shogi program, Aperyqhapaq (29), at full time controls and against Elmo under 2017 CSA world championship time controls (10 min per game and 10 s per move). (D) Average result of chess matches starting from different opening positions, either common human positions (see also Fig. 3) or the 2016 TCEC world championship opening positions (see also fig. S4), and average result of shogi matches starting from common human positions (see also Fig. 3). CSA world championship games start from the initial board position. Match conditions are summarized in tables S8 and S9.

Fig. 3. AlphaZero plays against (A) Stockfish in chess and (B) Elmo in shogi. In the left bar, AlphaZero plays white, starting from the given position; in the right bar, AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (green), draw (gray), or loss (red). The percentage frequency of self-play training games in which this opening was selected by AlphaZero is plotted against the duration of training, in hours.

Table S6 shows 20 chess games played by AlphaZero in its matches against Stockfish. In several games, AlphaZero sacrificed pieces for long-term strategic advantage, suggesting that it has a more fluid, context-dependent positional evaluation than the rule-based evaluations used by previous chess programs.

In shogi, AlphaZero defeated Elmo, winning 98.2% of games when playing black and 91.2% overall. We also played a match under the faster time controls used in the 2017 CSA world championship and against another state-of-the-art shogi program (29); AlphaZero again won both matches by a wide margin (Fig. 2).

Table S7 shows 10 shogi games played by AlphaZero in its matches against Elmo. The frequency plots in Fig. 3 and the time line in fig. S2 show that AlphaZero frequently plays one of the two most common human openings but rarely plays the second, deviating on the very first move.

AlphaZero searches just 60,000 positions per second in chess and shogi, compared with 60 million for Stockfish and 25 million for Elmo (table S4). AlphaZero may compensate for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations (Fig. 4 provides an example from the match against Stockfish), arguably a more humanlike approach to searching, as originally proposed by Shannon (30). AlphaZero also defeated Stockfish when given 1/10 as much thinking time as its opponent (i.e., searching 1/10 as many positions) and won 46% of games against Elmo when given 1/100 as much time (i.e., searching 1/100 as many positions) (Fig. 2). The high performance of AlphaZero with the use of MCTS calls into question the widely held belief (31, 32) that alpha-beta search is inherently superior in these domains.
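The selective focus described above comes from the way AlphaZero's MCTS scores moves: each child is rated by its mean value plus an exploration bonus weighted by the network's policy prior and discounted by visits (the PUCT rule). The sketch below is a simplified, hypothetical rendering of that idea; the constant and data layout are illustrative, not the paper's implementation:

```python
import math

# Simplified PUCT-style child selection: score = Q + U, where U grows with
# the policy prior P and total visits, and shrinks as the child's own visit
# count N grows. The constant below is a hypothetical, illustrative value.

C_PUCT = 1.5

def select_child(children: list) -> dict:
    """children: dicts with 'Q' (mean value), 'P' (prior), 'N' (visit count)."""
    total_visits = sum(c["N"] for c in children)
    def puct(c):
        u = C_PUCT * c["P"] * math.sqrt(total_visits) / (1 + c["N"])
        return c["Q"] + u
    return max(children, key=puct)

children = [
    {"Q": 0.10, "P": 0.60, "N": 50},  # heavily visited favorite
    {"Q": 0.05, "P": 0.30, "N": 2},   # barely explored, decent prior
    {"Q": 0.00, "P": 0.10, "N": 1},
]
best = select_child(children)
print(best["N"])  # 2: the under-explored move gets the largest bonus
```

Because the bonus decays as a move accumulates visits, search effort concentrates on lines the network rates highly while still probing neglected alternatives, which is how a 60,000-position-per-second search can compete with a 60-million-position-per-second one.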

Fig. 4. The search is illustrated for a position (inset) from game 1 (table S6) between AlphaZero (white) and Stockfish (black) after 29. ... Qf8. The internal state of AlphaZero's MCTS is summarized after 10^2, 10^3, ..., 10^6 simulations. Each summary shows the 10 most visited states. The estimated value is shown in each state, from white's perspective, scaled to the range [0, 100]. The visit count of each state, relative to the root state of that tree, is proportional to the thickness of the border circle. AlphaZero considers 30. c6 but eventually plays 30. d5.

The game of chess represented the pinnacle of artificial intelligence research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning and search algorithm, originally devised for the game of Go, that achieved superior results within a few hours, searching 1/1000 as many positions, given no domain knowledge except the rules of chess. Furthermore, the same algorithm was applied without modification to the more challenging game of shogi, again outperforming state-of-the-art programs within a few hours. These results bring us a step closer to fulfilling a longstanding ambition of artificial intelligence (3): a general game-playing system that can learn to master any game.

F.-H. Hsu, Behind Deep Blue: Building the Computer That Defeated the World Chess Champion (Princeton Univ., 2002).

C. J. Maddison, A. Huang, I. Sutskever, D. Silver, paper presented at the International Conference on Learning Representations 2015, San Diego, CA, 7 to 9 May 2015.

D. N. L. Levy, M. Newborn, How Computers Play Chess (Ishi Press, 2009).

V. Allis, Searching for solutions in games and artificial intelligence, Ph.D. thesis, Transnational University Limburg, Maastricht, Netherlands (1994).

W. Steinitz, The Modern Chess Instructor (Edition Olms, 1990).

E. Lasker, Common Sense in Chess (Dover Publications, 1965).

J. Knudsen, Essential Chess Quotations (iUniverse, 2000).

N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, D. H. Yoon, in Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, Canada, 24 to 28 June 2017 (Association for Computing Machinery, 2017), pp. 1–12.

R. Coulom, in Proceedings of the Sixth International Conference on Computers and Games, Beijing, China, 29 September to 1 October 2008 (Springer, 2008), pp. 113–124.

O. Arenz, Monte Carlo chess, master's thesis, Technische Universität Darmstadt (2012).

O. E. David, N. S. Netanyahu, L. Wolf, in Artificial Neural Networks and Machine Learning – ICANN 2016, Part II, Barcelona, Spain, 6 to 9 September 2016 (Springer, 2016), pp. 88–96.

T. Marsland, Encyclopedia of Artificial Intelligence, S. Shapiro, Ed. (Wiley, 1987).

T. Kaneko, K. Hoki, in Advances in Computer Games: 13th International Conference, ACG 2011, Revised Selected Papers, Tilburg, Netherlands, 20 to 22 November 2011 (Springer, 2012), pp. 158–169.

M. Lai, Giraffe: Using deep reinforcement learning to play chess, master's thesis, Imperial College London (2015).

R. Ramanujan, A. Sabharwal, B. Selman, in Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), Catalina Island, CA, 8 to 11 July (AUAI Press, 2010).

K. He, X. Zhang, S. Ren, J. Sun, in Computer Vision – ECCV 2016, 14th European Conference, Part IV, Amsterdam, Netherlands, 11 to 14 October 2016 (Springer, 2016), pp. 630–645.

Acknowledgments: We thank M. Sadler for analyzing chess games; Y. Habu for analyzing shogi games; L. Bennett for organizational assistance; B. Konrad, E. Lockhart, and G. Ostrovski for reviewing the paper; and the rest of the DeepMind team for their support. Funding: All research described in this report was funded by DeepMind and Alphabet. Author contributions: D.S., J.S., T.H., and I.A. designed the AlphaZero algorithm with advice from T.G., A.G., T.L., K.S., M. Lai, L.S., and M. Lan.; J.S., I.A., T.H., and M. Lai implemented the AlphaZero program; T.H., J.S., D.S., M. Lai, I.A., T.G., K.S., D.K., and D.H. ran experiments and/or analyzed data; D.S., T.H., J.S., and D.H. managed the project; D.S., J.S., T.H., M. Lai, I.A., and D.H. wrote the paper. Competing interests: DeepMind has filed the following patent applications related to this work: PCT/EP2018/063869, US15/280,711, and US15/280,784. Data and materials availability: A full description of the algorithm in pseudocode as well as details of additional games between AlphaZero and other programs is available in the supplementary materials.


What would it be like to be a conscious AI? We might never know. – MIT Technology Review

"Humans are active listeners; we create meaning where there is none, or none intended. It is not that the octopus's utterances make sense, but rather that the islander can make sense of them," Bender says.

For all their sophistication, today's AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans, who have minds, choose to interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse's brain.

And yet, we know that brains can produce what we understand to be consciousness. If we can eventually figure out how brains do it, and reproduce that mechanism in an artificial device, then surely a conscious machine might be possible?

When I was trying to imagine Robert's world in the opening to this essay, I found myself drawn to the question of what consciousness means to me. My conception of a conscious machine was undeniably, perhaps unavoidably, human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI?

It's probably hubristic to think so. The project of building intelligent machines is biased toward human intelligence. But the animal world is filled with a vast range of possible alternatives, from birds to bees to cephalopods.

A few hundred years ago the accepted view, pushed by René Descartes, was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Few think that today: if we are conscious, then there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line around mammals? Birds appear to reflect when they solve puzzles. Most animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective consciousness.

But how can we truly picture what that must feel like? As the philosopher Thomas Nagel noted, it must be like something to be a bat, but what that is we cannot even imagine, because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that's still not what it must be like for a bat, with its bat mind.
