Archive for December, 2019

As the Year Closes: Appreciating One’s Political Opponents – Merion West



But if you have an enemy, do not requite him evil with good, for that would put him to shame. Rather prove that he did you some good. And rather be angry than put to shame. And if you are cursed, I do not like that you want to bless. Rather join a little in the cursing.

Nietzsche, Thus Spoke Zarathustra

The past year has been a gratifying one for me, seeing the publication of my first two books and taking the leap into married life. As I will be taking several weeks off through the holidays, I wanted to present a short article expressing gratitude towards my various interlocutors and intellectual opponents at Merion West and elsewhere. Through the year I engaged in edifying dialogues with writers such as Henry George and Samuel Kronen, while also responding to various critics who have put forward arguments against my positions. Many of these dialogues were highly engaging. Some were just eye-roll inducing. But each encouraged me to think more carefully about the relationship between an author and his or her critics and opponents. This has obvious bearing on our strained political climate, where increasing polarization and the banalization of post-modern culture can make it hard to appreciate those who feel differently than we do.

This is not to say that everyone who holds an opinion worth criticizing is saying something of value. Nazis, racists, and so on may all hold views which should be criticized, but they are not saying much that contributes to the public discourse or adds anything to the world. Their reactionary impulse is to simply negate anything that betters the lives of those they resent. Such individuals can be understood and criticized, but the only thing we can truly learn from them is the root causes of such a distorted mindset. Fortunately, many (let's hope most!) of the people in our society are not creatures of resentment and anger but, rather, morally committed citizens with strong opinions and aspirations as sincere as our own. Perhaps they are mistaken. Perhaps we are. But there is much to be learnt from hearing their perspectives and trying to incorporate their insights into our own worldview.

As a progressive, I think there is much that can be learnt from examining the classics of religious and conservative thinking. It can make one less driven by purely materialist analysis. It can also make one more willing to take seriously the questions of meaning and community addressed by figures like Jordan Peterson, and it can facilitate an appreciation for continuity with the past. The first set of questions, on the nature of meaning, is especially pertinent; this is an area where too many contemporary progressives ignore questions of existential meaning in favor of more down-to-earth material concerns. Adorno, Sartre, and de Beauvoir were certainly not guilty of such myopia, and it is important to return to their ambitions. I think much the same is true on the other end of the spectrum. Too many conservative thinkers are guilty of transforming the past into an idol, while neglecting the lived experiences and injustices of the present because dealing with them would involve asking tough questions about the legacy of our communities and their myriad sins. Conservatives could also gain much by looking at the works of radical thinkers, from Marx's analysis of how capitalism undermines traditional ways of life to Wendy Brown's pioneering analyses of the psychology of identity politics, rooted in the "wounded attachments" of marginalized individuals as well as their sincere experiences of victimization. Such engagements with the other side are unlikely to produce sudden conversions, but they may help make politics less of a bitter and one-sided conflict, driven by the myopic "hooligans" Jason Brennan criticizes in Against Democracy.

In this spirit I would like to thank all the many individuals who have written or commented on my writing the last year, especially those who have offered sincere and interesting criticisms that have helped me develop my understanding of the world. It may be that we will never reach a consensus on the correct approach to some of the most pressing problems, though the challenge itself has its rewards. But the opportunity to learn and grow is one I appreciate and hope I offered in return.

Matt McManus is Professor of Politics and International Relations at Tec de Monterrey, and the author of Making Human Dignity Central to International Human Rights Law and The Rise of Post-Modern Conservatism. His new projects include co-authoring a critical monograph on Jordan Peterson and a book on liberal rights for Palgrave Macmillan. Matt can be reached at mattmcmanus300@gmail.com or added on Twitter via @mattpolprof.


Wentz, Eagles keep NFC East hopes alive by beating Redskins – WITN

LANDOVER, Md. (AP) Carson Wentz recovered from a disastrous fumble by leading a 75-yard, go-ahead scoring drive and throwing his third touchdown pass of the day, keeping the Philadelphia Eagles' NFC East hopes on track with a 37-27 victory over the Washington Redskins on Sunday.

Wentz threw TD passes to running back Miles Sanders, tight end Zach Ertz and receiver Greg Ward and was 30 of 43 for 266 yards. The 4-yard pass from Wentz to Ward with 26 seconds left put Philadelphia up for good and electrified a stadium full of green-clad Eagles fans.

Wentz's ability to bounce back from some accuracy issues and a turnover means the Eagles (7-7) are still in the thick of the division race with a game against the division rival Dallas Cowboys coming next week.

Of course, Wentz didn't do it by himself. Sanders rushed for 122 yards and a touchdown and caught six passes for 50 yards. The Eagles' defense, which struggled to stop Washington's Dwayne Haskins for most of the afternoon, forced a fumble that Nigel Bradham returned for a touchdown on the game's final play.

With the Eagles coming off an overtime victory against Eli Manning and the New York Giants, a loss to Washington (3-11) could've left them facing elimination next week. Allowing an early 75-yard TD pass from Haskins to Terry McLaurin and falling behind 7-3, 14-10, 21-17 and 27-24 made that a distinct possibility.

Instead, Wentz was able to work some magic when it mattered most.

PETERSON TIES PAYTON

Adrian Peterson's 10-yard touchdown run early in the fourth quarter gave him 110 rushing touchdowns for his career and tied him with Walter Payton for fourth on the all-time list. Peterson had 16 carries for 66 yards.

MEYER SIGHTING

Free agent coach Urban Meyer took in the game from Redskins owner Dan Snyder's box and watched parts of it with a familiar face from his college past. Meyer at one point could be seen talking and laughing with injured Redskins quarterback Alex Smith, whom he coached at Utah.

Meyer has connections to several Redskins players, including Florida products Jordan Reed and Jon Bostic and, of course, Haskins and McLaurin.

INJURIES

Eagles: Played without WR Nelson Agholor (knee), RB Jordan Howard (shoulder), RT Lane Johnson (ankle) and DE Derek Barnett (ankle), and put WR Alshon Jeffery (foot) on injured reserve.

Redskins: Rookie CB Jimmy Moreland left in the third quarter with a foot injury. ... CB Aaron Colvin was injured early in the fourth and CB Fabian Moreau left with a hamstring injury in the final minutes. ... Washington was without WR Trey Quinn (concussion), RG Brandon Scherff (elbow/shoulder) and CB Quinton Dunbar (hamstring), and put RB Derrius Guice (knee), WR Paul Richardson (hamstring) and LB Ryan Kerrigan (calf) on IR.

UP NEXT

Eagles: host the Cowboys in what could be a crucial game to decide who wins the division.

Redskins: host the New York Giants in either another Eli Manning swan song game or a showdown between Haskins and Daniel Jones.


The only campus watchdog – The Signal

Imagine you are hired by a news company to mediate between angry readers and an understaffed newsroom. It is not your job to take sides; you are strictly focused on establishing trust between readers and reporters. At times you will need to determine who is right and who is wrong, holding people accountable for their mistakes. This is the job of the public editor.

There are 43 university newspapers in Canada; the Varsity is the only one with a public editor. Published weekly in print and online, the Varsity circulates 18,000 copies across all University of Toronto campuses, serving 50,000 undergraduate and 17,000 graduate students.

Morag McGreevey, the Varsity's public editor from October 2018 until May 2019, accepted the challenge with confidence. Having worked as a freelance business journalist before her time as public editor, McGreevey understood the importance of defending a publication's integrity.

McGreevey says that in order for university newspapers to be taken seriously, editorial staff need to actively listen to readers' concerns. "Oftentimes the campus newspaper is the only source of news about what's happening on campus," McGreevey says.

While most public editors interact with the newsroom after a complaint is received, McGreevey met with the Varsity staff on a weekly basis. Knowing about each story before publication helped her gain an early idea of which stories readers might criticize.

Meeting often helped her establish open discussions between the paper and its readers. "I would present reader concerns and explain why I thought they were legitimate," McGreevey says. "Sometimes the editorial board would agree with me right off the bat, and other times it would be a more challenging conversation."

Public editors act as watchdogs and open a dialogue between readers and journalists. In an age when readers' trust in the media is on the decline, only a few newsrooms in Canada operate with a public editor: Kathy English of the Toronto Star, Sylvia Stead of the Globe and Mail, and Jack Nagler of CBC and Radio-Canada.

In 2017, the Varsity became the first university paper in Canada to operate with a public editor.

Alex McKeen, the Varsity's editor-in-chief from 2016 to 2017, led her reporters through a storm of reader hostility. The criticism began during coverage of Jordan Peterson's controversial declarations about sexual identity.

In 2017, Peterson released a two-hour conversation with Camille Paglia, a feminist academic, social critic, and author of the New York Times bestseller Sex, Art and American Culture. During their discussion, which was posted on YouTube, Peterson expressed concern with the way men and women approach confrontation with each other.

The Varsity received heavy criticism from its audience after headlining a story "Jordan Peterson: I don't think that men can control crazy women." The secondary headline, too, was controversial: "U of T psychology prof says he's defenceless against female insanity." Readers flooded the comment section with frustration.

McKeen said, "It became really apparent to me that it would be helpful to have someone other than myself look at the complaints coming in with an objective eye."

Unsure as to how the Varsity would operate with a public editor, McKeen asked Kathy English, the public editor at the Toronto Star, for advice. English recommended that the public editor establish a strong relationship with both the newsroom and readers.

English says, "When Alex was the editor, we met over lunch and she told me she was thinking of starting this public editor role. I just applauded her. I thought it was just amazing."

In 1967, the Louisville Courier-Journal and the Louisville Times became the first newspapers in the United States to appoint a public editor. The Toronto Star followed in 1972, becoming the first Canadian paper to hire one. The concept was in place much earlier in Japan: the Asahi Shimbun in Tokyo established a committee to receive and investigate reader complaints in 1922.

Today, public editors are employed in 24 countries spanning five continents. The standards and practices of these public editors are overseen by the Organization of News Ombudsmen (ONO). Founded in 1980, ONO is an international non-profit organization designed to promote accurate and fair reporting. While names differ depending on the newsroom, titles such as reader's representative, ombudsman, and public editor describe the same role: taking the public's concerns to the newsroom and then reporting back.

In 2017 the New York Times ended the public editor position. In a memo to readers, Times publisher Arthur Sulzberger, Jr. explained that the role of the public editor had outgrown the position. "When our audience has questions or concerns, whether about current events or our coverage decisions, we must answer them ourselves," he said.

Jeffrey Dvorkin, a former CBC News journalist and executive who was the ombudsman of National Public Radio in the U.S. and is now a U of T lecturer, follows changes to public editor positions carefully. A former president of ONO, Dvorkin says publishers are making a mistake if they assume social media will replace the duty of the public editor.

Since 2013, the Washington Post and Wall Street Journal have also eliminated the public editor position. The Columbia Journalism Review (CJR) is not letting those newsrooms off the hook. In June 2019, Kyle Pope, editor in chief of the CJR, announced the publication would hire four journalists to serve as third-party public editors, overseeing stories reported by the Times, MSNBC, CNN and the Post. In an interview with National Public Radio, Pope said readers are hungry for the truth: audiences want news organizations to answer tough questions about the decisions they make.

The Digital News Report, conducted by the Reuters Institute for the Study of Journalism at Oxford University, found in 2019 that readers' trust in media is declining worldwide. According to the study, only 52 per cent of Canadians have trust in Canadian news.

"We live in a world where we have a lot of skepticism about institutions, and the media is absolutely one of those institutions that generates a lot of mistrust," says Jack Nagler, English Services Ombudsman at CBC and Radio-Canada. Nagler says news organizations must be able to convince people that the news they share is better than the competition's.

As CBC Ombudsman, Nagler represents a voice for the people. He starts each day in his Toronto office going through complaints received through social media, email and by phone. The complaints of greatest concern are his top priority. His next step is research. Reading, listening to and watching coverage of key stories on CBC's platforms gives Nagler the opportunity to gain a better understanding of the public's criticism.

In one recent complaint, he examined coverage of Quebec's Bill 21, in which The National featured three citizens, all critical of the bill. Bill 21 bans teachers, police officers, judges and many other public workers from wearing hijabs, turbans, crucifixes and other religious symbols in the course of their duties. Nagler concluded that while The National failed to provide balanced coverage on March 28, 2019, CBC opinion articles, podcasts and previous news stories considered both sides of the argument.

"You can't do this job," Nagler says, "without a very thick skin."

Transparency in journalism is not as easy to achieve as some might think. Without open and unbiased discussions with readers, viewers and listeners, newsrooms risk losing audiences that feel ignored.

Since a public editor was introduced at the Varsity in 2017, former staff have gone on to establish their own careers, moving forward with lessons learned during their time as student journalists.

Looking back, McGreevey is happy to have played a role in guiding the Varsity staff to address the issues readers wanted answered. "Oftentimes, when you are a reader of news, you are reacting personally; you are reacting immediately," she says.

"Since we are so busy, it's not always a thoughtful response. I felt very privileged to have the opportunity to take that time and thoughtfully engage with all the parties involved, working through issues to solve the problem."



There's No Such Thing As The Machine Learning Platform – Forbes

In the past few years, you might have noticed the increasing pace at which vendors are rolling out platforms that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The Data Science Platform and Machine Learning Platform are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. If you're a major technology vendor and you don't have some sort of big play in the AI space, then you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market share grab going on?

The core of this insight is the realization that ML and data science projects are nothing like typical application or hardware development projects. Whereas hardware and software development traditionally focused on the functionality of systems or applications, data science and ML projects are really about managing data: continuously evolving what is learned from the data and iterating constantly on the models built from it. Typical development processes and platforms simply don't work from a data-centric perspective.

It should be no surprise then that technology vendors of all sizes are focused on developing platforms that data scientists and ML project managers will depend on to develop, run, operate, and manage their ongoing data models for the enterprise. To these vendors, the ML platform of the future is like the operating system or cloud environment or mobile development platform of the past and present. If you can dominate market share for data science / ML platforms, you will reap rewards for decades to come. As a result, everyone with a dog in this fight is fighting to own a piece of this market.

However, what does a Machine Learning platform look like? How is it the same as or different from a Data Science platform? What are the core requirements for ML platforms, and how do they differ from those of more general data science platforms? Who are the users of these platforms, and what do they really want? Let's dive deeper.

What is the Data Science Platform?

Data scientists are tasked with wrangling useful information from a sea of data and translating business and operational information needs into the language of data and math. Data scientists need to be masters of statistics, probability, mathematics, and algorithms that help to glean useful insights from huge piles of information. A data scientist creates a hypothesis, runs tests and analyses of the data, and then translates the results for someone else in the organization to easily view and understand. So it follows that a pure data science platform would meet the needs of helping craft data models, determining the best fit of information to a hypothesis, testing that hypothesis, facilitating collaboration amongst teams of data scientists, and helping to manage and evolve the data model as information continues to change.

Furthermore, data scientists don't focus their work in code-centric Integrated Development Environments (IDEs), but rather in notebooks. First popularized by academically oriented, math-centric platforms like Mathematica and Matlab, but now prominent in the Python, R, and SAS communities, notebooks are used to document data research and simplify reproducibility of results by allowing the notebook to run on different source data. The best notebooks are shared, collaborative environments where groups of data scientists can work together and iterate models over constantly evolving data sets. While notebooks don't make great environments for developing code, they make great environments to collaborate, explore, and visualize data. Indeed, the best notebooks are used by data scientists to quickly explore large data sets, assuming sufficient access to clean data.

However, data scientists can't perform their jobs effectively without access to large volumes of clean data. Extracting, cleaning, and moving data is not really the role of a data scientist, but rather that of a data engineer. Data engineers are challenged with taking data from a wide range of systems, in structured and unstructured formats, data which is usually not clean: it arrives with missing fields, mismatched data types, and other data-related issues. In this way, the data engineer designs, builds, and arranges data pipelines. Good data science platforms also enable data scientists to easily leverage compute power as their needs grow. Instead of copying data sets to a local computer to work on them, platforms allow data scientists to easily access compute power and data sets with minimal hassle. A data science platform is challenged with providing these data engineering capabilities as well. As such, a practical data science platform will have elements of data science capabilities and necessary data engineering functionality.
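To make the data engineering side concrete, here is a minimal sketch of the kind of cleanup work described above, using pandas; the file and column names are hypothetical, invented purely for illustration:

```python
import pandas as pd

# Hypothetical raw extract: values arrive as strings, with missing
# fields, mismatched types, and duplicate records.
raw = pd.read_csv("encounters_raw.csv")

# Coerce types that upstream systems deliver inconsistently.
raw["visit_date"] = pd.to_datetime(raw["visit_date"], errors="coerce")
raw["charge_amount"] = pd.to_numeric(raw["charge_amount"], errors="coerce")

# Drop exact duplicates and rows missing the join key.
clean = raw.drop_duplicates().dropna(subset=["patient_id"])

# Flag missing values explicitly rather than silently imputing them.
clean["charge_missing"] = clean["charge_amount"].isna()
clean["charge_amount"] = clean["charge_amount"].fillna(0.0)
```

A data science platform bakes steps like these into repeatable pipelines, so data scientists are not re-cleaning the same extracts by hand.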

What is the Machine Learning Platform?

We just spent several paragraphs talking about data science platforms without even once mentioning AI or ML. Of course, the overlap is the use of data science techniques and machine learning algorithms applied to large sets of data for the development of machine learning models. The tools that data scientists use on a daily basis have significant overlap with the tools used by ML-focused scientists and engineers. However, these tools aren't the same, because the needs of ML scientists and engineers are not the same as those of more general data scientists and engineers.

Rather than just focusing on notebooks and the ecosystem for managing and collaborating on those notebooks, those tasked with managing ML projects need access to a range of ML-specific algorithms and libraries, plus the infrastructure to train those algorithms over large and evolving datasets. An ideal ML platform helps ML engineers and data scientists discover which machine learning approaches work best, tune hyperparameters, deploy compute-intensive ML training across on-premise or cloud-based CPU, GPU, and/or TPU clusters, and manage and monitor both supervised and unsupervised modes of training.

Clearly a collaborative, interactive, visual system for developing and managing ML models in a data science platform is necessary, but it's not sufficient for an ML platform. As hinted above, one of the more challenging parts of making ML systems work is the setting and tuning of hyperparameters. The whole concept of a machine learning model is that it requires various parameters to be learned from the data: what machine learning actually learns are the parameters of a model that fits the data, and new data is then fit to that learned model. Hyperparameters, by contrast, are configurable values that are set prior to training an ML model and that can't be learned from data. These hyperparameters govern factors such as model complexity, speed of learning, and more. Different ML algorithms require different hyperparameters, and some don't need any at all. ML platforms help with the discovery, setting, and management of hyperparameters, among other things, including the algorithm selection and comparison that non-ML-specific data science platforms don't provide.
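The distinction between parameters and hyperparameters is easy to see in code. Below is a minimal scikit-learn sketch, not tied to any particular vendor platform: the regularization strength C is a hyperparameter chosen before training, while the model's coefficients are parameters learned from the data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# C is a hyperparameter: set before training, never learned from data.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)

print("chosen hyperparameter:", search.best_params_)
# The coefficients, by contrast, are parameters learned from the data.
print("learned parameters:", search.best_estimator_.coef_)
```

Part of what an ML platform automates is exactly this search loop, at much larger scale and across many candidate algorithms.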

The different needs of big data, ML engineering, model management, operationalization

At the end of the day, ML project managers simply want tools to make their jobs more efficient and effective. But not all ML projects are the same. Some are focused on conversational systems, while others are focused on recognition or predictive analytics. Yet others are focused on reinforcement learning or autonomous systems. Furthermore, these models can be deployed (or operationalized) in various ways. Some models might reside in the cloud or on on-premise servers, while others are deployed to edge devices or run in offline batch modes. These differences in ML application, deployment, and needs between data scientists, engineers, and ML developers make the concept of a single ML platform not particularly feasible. It would be a jack of all trades and master of none.

As such, we see four different platforms emerging: one focused on the needs of data scientists and model builders, another on big data management and data engineering, yet another on model scaffolding and building systems that interact with models, and a fourth on managing the model lifecycle, or ML Ops. The winners will focus on building out capabilities for each of these parts.

The Four Environments of AI (Source: Cognilytica)

The winners in the data science platform race will be the ones that simplify ML model creation, training, and iteration. They will make it quick and easy for companies to move from "dumb," unintelligent systems to ones that leverage the power of ML to solve problems that previously could not be addressed by machines. Data science platforms that don't enable ML capabilities will be relegated to non-ML data science tasks. Likewise, big data platforms that inherently enable data engineering capabilities will be winners. Similarly, application development tools will need to treat machine learning models as first-class participants in their lifecycle, just like any other form of technology asset. Finally, the space of ML operations (ML Ops) is just now emerging and will no doubt be big news in the next few years.

When a vendor tells you they have an AI or ML platform, the right response is to ask, "Which one?" As you can see, there isn't just one ML platform, but rather different ones that serve very different needs. Make sure you don't get caught up in these vendors' marketing hype; compare what they say they have with what they actually have.


Machine learning results: pay attention to what you don’t see – STAT

Even as machine learning and artificial intelligence are drawing substantial attention in health care, overzealousness for these technologies has created an environment in which other critical aspects of the research are often overlooked.

There's no question that the increasing availability of large data sources and off-the-shelf machine learning tools offers tremendous resources to researchers. Yet a lack of understanding about the limitations of both the data and the algorithms can lead to erroneous or unsupported conclusions.

Given that machine learning in the health domain can have a direct impact on people's lives, broad claims emerging from this kind of research should not be embraced without serious vetting. Whether conducting health care research or reading about it, make sure to consider what you don't see in the data and analyses.


One key question to ask is: Whose information is in the data and what do these data reflect?

Common forms of electronic health data, such as billing claims and clinical records, contain information only on individuals who have encounters with the health care system. But many individuals who are sick don't or can't see a doctor or other health care provider and so are invisible in these databases. This may be true for individuals with lower incomes or those who live in rural communities with rising hospital closures, a point University of Toronto machine learning professor Marzyeh Ghassemi made earlier this year.

Even among patients who do visit their doctors, health conditions are not consistently recorded. Health data also reflect structural racism, which has devastating consequences.

Data from randomized trials are not immune to these issues. As a ProPublica report demonstrated, black and Native American patients are drastically underrepresented in cancer clinical trials. This is important to underscore given that randomized trials are frequently highlighted as superior in discussions about machine learning work that leverages nonrandomized electronic health data.

In interpreting results from machine learning research, it's important to be aware that the patients in a study often do not reflect the population we wish to make conclusions about, and that the information collected is far from complete.

It has become commonplace to evaluate machine learning algorithms based on overall measures like accuracy or area under the curve. However, one evaluation metric cannot capture the complexity of performance. Be wary of research that claims to be ready for translation into clinical practice but only presents a leaderboard of tools ranked on a single metric.

As an extreme illustration, an algorithm designed to predict a rare condition found in only 1% of the population can be extremely accurate by labeling all individuals as not having the condition. This tool is 99% accurate, but completely useless. Yet, it may outperform other algorithms if accuracy is considered in isolation.
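A few lines of Python make the trap explicit; this is a toy sketch with simulated labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)

# Simulated population: the condition occurs in 1% of individuals.
y_true = (rng.random(100_000) < 0.01).astype(int)

# A "classifier" that labels everyone as not having the condition.
y_pred = np.zeros_like(y_true)

print(f"accuracy: {accuracy_score(y_true, y_pred):.3f}")  # ~0.99
print(f"recall:   {recall_score(y_true, y_pred):.3f}")    # 0.0, finds no cases
```

Accuracy looks excellent while recall, the fraction of true cases the tool actually finds, is zero.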

What's more, algorithms are frequently not evaluated on multiple hold-out samples, as in cross-validation. Using only a single hold-out sample, which is done in many published papers, often leads to higher variance and misleading performance metrics.
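One illustrative sketch of why this matters: the same model evaluated across ten cross-validation folds can produce a wide spread of scores, any one of which could have been reported as "the" hold-out result.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

scores = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=10, scoring="roc_auc"
)

print("per-fold AUC:", np.round(scores, 3))
print(f"mean {scores.mean():.3f}, std {scores.std():.3f}")
```

Reporting the mean and spread across folds, rather than one lucky split, gives a far more honest picture of performance.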

Beyond examining multiple overall metrics of performance for machine learning, we should also assess how tools perform in subgroups as a step toward avoiding bias and discrimination. For example, artificial intelligence-based facial recognition software performed poorly when analyzing darker-skinned women. Many measures of algorithmic fairness center on performance in subgroups.
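Checking for this kind of gap can be as simple as stratifying the chosen metric by group before reporting a single overall number. The sketch below is hypothetical: the group labels and miss rates are simulated solely to show the mechanics.

```python
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 10_000

group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
y_true = (rng.random(n) < 0.05).astype(int)

# Simulate a model that misses far more true cases in the smaller group.
miss_rate = np.where(group == "A", 0.2, 0.6)
y_pred = y_true * (rng.random(n) > miss_rate)

for g in ["A", "B"]:
    mask = group == g
    print(f"group {g}: recall {recall_score(y_true[mask], y_pred[mask]):.2f}")
```

An overall recall computed on the pooled data would hide the fact that one subgroup is served much worse than the other.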

Bias in algorithms has largely not been a focus in health care research. That needs to change. A new study found substantial racial bias against black patients in a commercial algorithm used by many hospitals and other health care systems. Other work developed algorithms to improve fairness for subgroups in health care spending formulas.

Subjective decision-making pervades research. Who decides what the research question will be, which methods will be applied to answering it, and how the techniques will be assessed all matter. Diverse teams are needed, and not just because they yield better results. As Rediet Abebe, a junior fellow of Harvard's Society of Fellows, has written, "In both private enterprise and the public sector, research must be reflective of the society we're serving."

The influx of so-called digital data that's available through search engines and social media may be one resource for understanding the health of individuals who do not have encounters with the health care system. There have, however, been notable failures with these data. But there are also promising advances using online search queries at scale where traditional approaches like conducting surveys would be infeasible.

Increasingly granular data are now becoming available thanks to wearable technologies such as Fitbit trackers and Apple Watches. Researchers are actively developing and applying techniques to summarize the information gleaned from these devices for prevention efforts.

Much of the published clinical machine learning research, however, focuses on predicting outcomes or discovering patterns. Although machine learning for causal questions in health and biomedicine is a rapidly growing area, we don't see a lot of this work yet because it is new. Recent examples of it include the comparative effectiveness of feeding interventions in a pediatric intensive care unit and the effectiveness of different types of drug-eluting coronary artery stents.

Understanding how the data were collected and using appropriate evaluation metrics will also be crucial for studies that incorporate novel data sources and those attempting to establish causality.

In our drive to improve health with (and without) machine learning, we must not forget to look for what is missing: What information do we not have about the underlying health care system? Why might an individual or a code be unobserved? What subgroups have not been prioritized? Who is on the research team?

Giving these questions a place at the table will be the only way to see the whole picture.

Sherri Rose, Ph.D., is associate professor of health care policy at Harvard Medical School and co-author of the first book on machine learning for causal inference, Targeted Learning (Springer, 2011).
