Media Search:



New study shows the potential of machine learning in the early … – Swansea University

A study by Swansea University has revealed how machine learning can help with the early detection of Ankylosing Spondylitis (AS), an inflammatory arthritis, and revolutionise how people are identified and diagnosed by their GPs.

Published in the open-access journal PLOS ONE, the study, funded by UCB Pharma and Health and Care Research Wales, has been carried out by data analysts and researchers from the National Centre for Population Health & Wellbeing Research (NCPHWR).

The team used machine learning methods to develop a profile of the characteristics of people likely to be diagnosed with AS, the second most common cause of inflammatory arthritis.

Machine learning, a type of artificial intelligence, is a method of data analysis that automates model building to improve performance and accuracy. Its algorithms build a model based on sample data to make predictions or decisions without being explicitly programmed to do so.

Using the Secure Anonymised Information Linkage (SAIL) Databank based at Swansea University Medical School, a national data repository allowing anonymised person-based data linkage across datasets, the team identified patients with AS and matched them with people with no record of the condition.

The data was analysed separately for men and women, with a model developed using feature/variable selection and principal component analysis to build decision trees.
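The pipeline described above, variable selection followed by principal component analysis feeding decision trees, can be sketched in a few lines. This is a minimal illustration on random stand-in data using scikit-learn; the study's actual SAIL variables, tuning, and sex-stratified models are not reproduced here.

```python
# Minimal sketch: feature selection -> PCA -> decision tree.
# The data below are random stand-ins, not the SAIL records.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # 200 patients, 20 candidate variables
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical AS / non-AS label

model = make_pipeline(
    SelectKBest(f_classif, k=10),        # feature/variable selection
    PCA(n_components=5),                 # dimensionality reduction
    DecisionTreeClassifier(max_depth=3, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))
```

In practice such a model would be fitted separately on the male and female subsets, as the study did.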

The findings revealed:

Dr Jonathan Kennedy, Data Lab Manager at NCPHWR and study lead, said: "Our study indicates the enormous potential machine learning has to help identify people with AS and better understand their diagnostic journeys through the health system.

"Early detection and diagnosis are crucial to secure the best outcomes for patients. Machine learning can help with this. In addition, it can empower GPs helping them detect and refer patients more effectively and efficiently.

"However, machine learning is in the early stages of implementation. To develop this, we need more detailed data to improve prediction and clinical utility."

Professor Ernest Choy, Researcher at NCPHWR and Head of Rheumatology and Translational Research at Cardiff University, added: "On average, it takes eight years for patients with AS to go from first symptoms to receiving a diagnosis and treatment. Machine learning may provide a useful tool to reduce this delay."

Professor Kieran Walshe, Director of Health and Care Research Wales, added: "It's fantastic to see the cutting-edge role that machine learning can play in the early identification of patients with health conditions such as AS, and the work being undertaken at the National Centre for Population Health and Wellbeing Research.

"Though it is in its early stages, machine learning clearly has the potential to transform the way that researchers and clinicians approach the diagnostic journey, bringing about benefits to patients and their future health outcomes."

Read the full publication in the PLOS ONE journal.

View original post here:
New study shows the potential of machine learning in the early ... - Swansea University

Top 10 Court Cases That Changed the U.S. Justice System – Listverse

The United States justice system is full of landmark cases that have shaped how we understand and apply the law. Let's explore 10 of the most critical court cases that changed the justice system.

Related: Top 10 Times The US Government Took Inanimate Objects To Court

In 2015, Edward Caniglia had an argument with his wife, where he allegedly placed a gun on the table and told his wife to shoot him. Rather than comply, she left and called the police, asking them to complete a welfare check.

Caniglia agreed to go to the hospital for a psychiatric evaluation on the condition that the authorities would not take his guns. But police entered Caniglia's home without a warrant and seized his firearms.

Caniglia sued the police, arguing that the warrantless search and seizure violated his Fourth Amendment rights. The police claimed that they acted under the community caretaking exception to the Fourth Amendment, which allowed them to conduct a search and seizure for non-criminal purposes.

The Supreme Court ruled in favor of Caniglia, stating that the community caretaking exception did not apply to a private home and that the police had violated his Fourth Amendment rights.

The Court's decision reaffirms the importance of Fourth Amendment protections for private homes and personal property, limiting the scope of the community caretaking exception and strengthening the requirement for police to obtain a warrant before entering a private home.

In 1961, Clarence Gideon was arrested and charged with breaking and entering. Gideon could not afford a lawyer and was ultimately convicted and sentenced to five years in prison.

Gideon petitioned the U.S. Supreme Court, arguing that his Sixth Amendment right to counsel had been violated. Gideon argued that he should have been provided with a lawyer, even though he could not afford one.

The Supreme Court agreed with Gideon, ruling that the Sixth Amendment guarantees the right to counsel for defendants who cannot afford one.

Gideon v. Wainwright established a right to counsel for all criminal defendants, regardless of their ability to pay. The ruling expanded criminal defendants' rights and helped ensure that poor and marginalized individuals are not unfairly targeted or punished.

Ernesto Miranda was arrested and interrogated by police concerning a rape and kidnapping. During the interrogation, Miranda confessed to the crimes. However, the cops never told him about his right to remain silent or his ability to have an attorney present.

The prosecution used his confession as evidence, and he was sentenced to 20-30 years in prison.

Miranda's lawyers appealed, arguing that the police had violated his Fifth Amendment right against self-incrimination and his Sixth Amendment right to counsel by not informing him of those rights.

The Court ruled that police must inform suspects of their right to remain silent and their right to counsel before interrogation. Additionally, any statements obtained in violation of these rights cannot be used against the individual.

Miranda v. Arizona strengthened protections for criminal defendants and established clear guidelines for the police during interrogations. It is a crucial safeguard against coerced confessions and other abuses of power.

In 2007, Tarahrick Edwards was convicted of armed robbery and rape. Edwards appealed his conviction, arguing that the court minimized minority representation, allowing only one black individual to sit on the jury.

One juror voted to acquit Edwards, but under Louisiana's non-unanimous jury law he was convicted and sentenced to life in prison. Edwards challenged his conviction, stating that Louisiana's non-unanimous jury conviction laws were unconstitutional.

After repeated attempts to overturn his conviction, the Supreme Court held that the unanimous-jury requirement could not be applied retroactively.

Edwards v. Vannoy clarified the retroactive reach of Ramos v. Louisiana (2020). Under the decision, only individuals whose cases were still on direct review when Ramos was decided can challenge their non-unanimous jury convictions; convictions that were already final get no retroactive relief.

In 1967, William Furman, a black man, was arrested for murder after he broke into a home and killed the homeowner. Furman was convicted, and the trial judge imposed the death penalty. However, under Georgia law at the time, the death penalty was not mandatory for anyone convicted of murder, leaving the sentencing to the discretion of the judge or jury.

Furman appealed his sentence to the U.S. Supreme Court, arguing that Georgia's death penalty statute was unconstitutional because it allowed for arbitrary and discriminatory application. Specifically, Furman argued that the death penalty was more likely to be imposed on defendants who were black, poor, or otherwise disadvantaged.

In a 5-4 decision, the Supreme Court held that Georgia's death penalty statute, as well as the death penalty statutes of other states, violated the Eighth Amendment's prohibition against cruel and unusual punishment. The Court stated that the death penalty was being imposed with no standards to guide its application and was therefore unconstitutional.

The Court did not, however, hold that the death penalty itself was unconstitutional. Instead, it held that how it was being imposed was unconstitutional.

Furman v. Georgia forced states to revise their capital punishment statutes to include safeguards against discriminatory applications. In response, many states adopted new laws that required juries to consider mitigating factors before imposing the death penalty and provided an appellate review of death sentences.

In 1972, a group of burglars broke into the Democratic National Committee headquarters at the Watergate office complex in Washington, D.C. The burglars were later connected to President Richard Nixons re-election campaign. Soon after, an investigation began into the administrations role in the break-in and the subsequent cover-up.

Special prosecutor Archibald Cox issued a subpoena for Nixon to release recordings of conversations that had taken place in the Oval Office. Nixon refused to comply with the subpoena, citing executive privilege and arguing that the tapes contained confidential information that would harm national security if released.

In an 8-0 decision, the Supreme Court held that Nixon had no authority to withhold evidence relevant to a criminal trial. The Court rejected Nixon's claims of executive privilege, finding that while the president had a constitutional duty to protect national security, this duty did not give him the power to override the law or obstruct justice.

As a result of the decision, Nixon was forced to release the recordings, which contained incriminating evidence that ultimately led to his resignation.

The United States v. Nixon decision established that no one, not even the president, is above the law. The decision affirmed the judiciary's power to hold the executive branch accountable and require the release of evidence in criminal trials.

The case remains an important precedent for cases involving executive privilege and the separation of powers between branches of government.

In 1981, police officers in Los Angeles obtained a search warrant based on information from a reliable informant. However, the warrant was later found to be invalid due to a technical error.

Nonetheless, the officers searched Leon's home and found drugs and other evidence of drug trafficking. Leon was charged with drug offenses.

The case went to the U.S. Supreme Court, which created a "good faith" exception to the exclusionary rule, allowing evidence to be used at trial even if the warrant is later found to be defective, as long as the police acted in good faith when obtaining it.

Some have praised the decision in United States v. Leon as a necessary step to balance the need for law enforcement with the protection of individual rights. In contrast, others have criticized it as weakening the Fourth Amendment protections against unreasonable searches and seizures.

In 1982, James Batson, a black man, was on trial for burglary and receiving stolen goods. During jury selection, the prosecutor used challenges to strike all four black potential jurors from the jury pool, leaving an all-white jury.

Batson's defense attorneys objected to the prosecutor's use of peremptory challenges, arguing that it violated Batson's rights under the Fourteenth Amendment's Equal Protection Clause. The trial court rejected the objection, and Batson was convicted and sentenced to prison.

On appeal, Batson argued that the prosecutor's use of peremptory challenges to strike potential jurors based on race was unconstitutional.

The U.S. Supreme Court ruled that the prosecutor's use of peremptory challenges to exclude jurors based solely on their race violated the Equal Protection Clause of the Fourteenth Amendment. The Court held that a defendant has the right to a jury pool selected without regard to race and that peremptory challenges cannot be used to exclude jurors based on race or ethnicity.

Batson v. Kentucky established an important precedent for ensuring racial fairness in jury selection. It signaled a shift away from the historical practice of excluding jurors based on race, and it established clear guidelines for preventing discrimination in the jury selection process.

The ruling has been hailed as a significant step forward in the fight for racial equality in the criminal justice system. However, some critics argue that the Batson rule has been difficult to enforce and has not gone far enough in addressing issues of racial bias in the criminal justice system.

In 1960, supporters of the Civil Rights Movement placed a full-page ad in the New York Times that criticized the treatment of civil rights protesters in the South.

L. B. Sullivan, the city commissioner of Montgomery, Alabama, sued the New York Times for defamation, claiming that the ad contained false statements about him and harmed his reputation. At trial, the jury awarded Sullivan $500,000 in damages.

The case reached the U.S. Supreme Court, which ruled that the First Amendment's protection of free speech and the press extends to statements about public officials: such officials must prove actual malice (i.e., knowledge of falsity or reckless disregard for the truth) to recover damages.

New York Times Co. v. Sullivan has significantly impacted the freedom of the press and the ability of individuals to criticize public officials without fear of being sued for defamation. The actual malice standard set by the Court became an essential element of First Amendment law. It has been applied to a wide range of cases involving media coverage of public officials and matters of public concern.

In 2009, James Kahler shot and killed his estranged wife, her grandmother, and his two teenage daughters. Kahler was charged with capital murder, found guilty, and sentenced to death.

Kahler's defense team argued that he was not responsible for his actions due to his mental illness. However, the trial judge instructed the jury that mental illness alone was insufficient to negate intent or justify a lesser charge of second-degree murder.

Kahler appealed his conviction, arguing that the trial judges instructions violated his Eighth Amendment rights against cruel and unusual punishment and his Fourteenth Amendment right to due process.

In a 6-3 decision, the U.S. Supreme Court held that the Kansas law prohibiting the use of mental illness as a defense to criminal charges did not violate the Eighth or Fourteenth Amendments.

The Court noted that while mental illness can mitigate criminal sentencing, it does not negate the mens rea or intent required for a conviction.

The Court further held that the specific jury instructions given in Kahler's case did not violate his constitutional rights, as they did not preclude the consideration of his mental illness as a mitigating factor in sentencing.

Some advocates have criticized the decision in Kahler v. Kansas, arguing that the ruling could discourage defendants from seeking help for their mental health issues and lead to more severe punishments for those with mental illnesses.

However, the decision also reflects a longstanding legal principle that criminal intent is a critical element of criminal law and that mental illness, while a mitigating factor, cannot excuse criminal behavior entirely.

View original post here:
Top 10 Court Cases That Changed the U.S. Justice System - Listverse

Explanatory predictive model for COVID-19 severity risk employing … – Nature.com

The datasets used and/or analyzed during the current study are available from the corresponding author.

We used a case-control study for our research. All patients were recruited from Rabat's Cheikh Zaid University Center Hospital. COVID-19 hospitalizations occurred between March 6, 2020, and May 20, 2020, and patients were screened using clinical features (fever, cough, dyspnea, fatigue, headache, chest pain, and pharyngeal discomfort) and epidemiological history. Any patient admitted to Cheikh Zaid Hospital with a positive RT-PCR for SARS-CoV-2 was considered a COVID-19 case. The cases were divided into two categories according to severity: severe cases, with COVID symptoms and a positive RT-PCR test, requiring oxygen therapy; and non-severe cases, with or without COVID symptoms, a normal lung CT, and a positive RT-PCR, not requiring oxygen therapy. The controls were selected from Cheikh Zaid Hospital employees (two to three per week) who exhibited no clinical signs of COVID-19 and whose RT-PCR test was negative for the virus. People with chronic illnesses (high blood pressure, diabetes, cancer, and cardiovascular disease) and those who had used platelet-disrupting medications within the previous two weeks (aspirin, prasugrel, clopidogrel, ticagrelor, cangrelor, cilostazol, dipyridamole, abciximab, eptifibatide, tirofiban, non-steroidal anti-inflammatory drugs) were excluded from our study (Fig. 2).

Consequently, a total of 87 participants were selected for this study: 57 patients infected with SARS-CoV-2 (30 without severe COVID-19 symptoms and 27 with severe symptoms requiring hospitalization) and 30 healthy controls. Table 1 displays the patients' basic demographic and clinical information.

The cytokines investigated in our study are displayed in Table 2; the assay consists of two panels, the first containing 48 cytokines and the second only 21.

A data imputation procedure was used to fill in missing values. In fact, 29 individuals in our dataset had a missingness rate of more than 50 percent across their characteristics (cytokines), so missing values would significantly affect our analysis. The most prevalent way to deal with incomplete information is data imputation prior to classification, which entails estimating and filling in the missing values using the known data.

There are a variety of imputation approaches, such as mean, k-nearest neighbors, regression, Bayesian estimation, etc. In this article, we apply the iterative imputation strategy Multiple Imputation by Chained Equations with random forests (MICE-Forest) to handle the issue of missing data. The reason for this decision is to employ an imputation approach that can handle any sort of input data while making as few assumptions as possible about the data's structure55. The chained equation process is broken down into four core steps, which are repeated until optimal results are achieved56. The first step involves replacing every missing value with the mean of the observed values for the variable. In the second step, these mean imputations are set back to missing. In the third step, the observed values of a variable (say, x) are regressed on the other variables, with x functioning as the dependent variable and the others as the independent variables. As the variables in this investigation are continuous, predictive mean matching (PMM) was applied.

The fourth step involves replacing the missing data with the regression model's predictions. These imputed values are subsequently included alongside the observed values of the other variables as predictors. An iteration is the repetition of steps 2 through 4 for each variable with missing values; after one iteration, all missing values have been replaced by regression predictions based on observed data.
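The four chained-equation steps can be sketched directly. The sketch below uses plain least-squares regression in place of miceforest's random forests and predictive mean matching, on random stand-in data, so it illustrates only the iterative structure, not the study's exact method.

```python
# Bare-bones chained-equation imputation: mean-fill, then repeatedly
# regress each incomplete column on the others and re-impute.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=100)  # column 3 predictable from column 0
mask = rng.random(X.shape) < 0.2                # ~20% missing at random
data = X.copy()
data[mask] = np.nan

# Step 1: replace every missing value with the column mean.
filled = np.where(np.isnan(data), np.nanmean(data, axis=0), data)

for _ in range(10):                             # 10 iterations, as in the study
    for j in range(data.shape[1]):
        miss = np.isnan(data[:, j])
        if not miss.any():
            continue
        others = np.delete(filled, j, axis=1)
        A = np.c_[others, np.ones(len(filled))]  # design matrix with intercept
        # Step 3: regress the observed values of column j on the other columns.
        coef, *_ = lstsq(A[~miss], data[~miss, j], rcond=None)
        # Steps 2 and 4: reset column j's imputations and use the predictions.
        filled[miss, j] = A[miss] @ coef

print(np.abs(filled[mask] - X[mask]).mean())
```

Repeating the whole procedure with different random draws would yield the multiple imputed datasets the study describes.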

The regression coefficients ideally converge over numerous iterations. After each iteration the imputed values are replaced, and the number of iterations may vary; in the present study we investigated the outcomes of 10 iterations. This constitutes a single "imputation". Multiple imputations are performed by holding the observed values of all variables constant and modifying only the missing values to their respective imputation predictions. Depending on the number of imputations, this leads to multiple imputed datasets (30 in this study). The number of imputations depends on the fraction of missing data; following White et al.57, we selected 30 imputations for a missing-data fraction of around 30%. We used version 5.4.0 of the miceforest Python library to impute missing data. The values of the experiment's hyper-parameters for the MICE-Forest technique are listed in Table 3, and Fig. 4 illustrates the distribution of each imputation compared to the original data (in red).

The distribution of each imputation compared to the original data (in red).

Machine learning frameworks have demonstrated their ability to deal with complex data structures, producing impressive results in a variety of fields, including health care. However, a large amount of data is required to train these models58. This is particularly challenging in this study because the available dataset is limited (87 records and 48 attributes) owing to acquisition accessibility and costs; such limited data cannot, on its own, be used to analyze and develop models.

To solve this problem, Synthetic Data Generation (SDG) is one of the most promising approaches, and it opens up many opportunities for collaborative research, such as building prediction models and identifying patterns.

Synthetic data is artificial data generated by a model trained or built to imitate the distributions (i.e., shape and variance) and structure (i.e., correlations among the variables) of actual data59,60. It has been studied for several modalities within healthcare, including biological signals61, medical images62, and electronic health records (EHR)63.

In this paper, a VAE network-based approach is used to generate 500 samples of synthetic cytokine data from the real data. The VAE's process consists of providing labeled sample data (X) to the encoder, which captures the distribution of the deep feature (z), and the decoder, which generates data from the deep feature (z) (Fig. 1).

The VAE architecture preserved each sample's probability and matched the column means of the actual data. Figure 5 depicts this by plotting the mean of each real data column on the X-axis against the mean of the corresponding synthetic data column on the Y-axis.

Each point represents a column mean in the real and synthetic data. A perfect match would be indicated by all the points lying on the line y=x.

The cumulative feature sum is an additional technique for comparing synthetic and real data. The feature sum can be considered the sum of a patient's diagnostic values. As shown in Fig. 6, a comparison of the global distribution of feature sums reveals a strong similarity between the distributions of the synthetic and real data.
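Both fidelity checks just described, column means against the line y = x and the distribution of per-patient feature sums, reduce to a few lines of numpy. The arrays below are random stand-ins for the real and VAE-generated cytokine matrices, with the study's shapes (87 x 48 real, 500 x 48 synthetic).

```python
# Two quick fidelity checks on stand-in data: per-column means (Fig. 5
# style) and per-patient feature sums (Fig. 6 style).
import numpy as np

rng = np.random.default_rng(2)
real = rng.lognormal(mean=1.0, sigma=0.5, size=(87, 48))       # stand-in "real" data
synthetic = rng.lognormal(mean=1.0, sigma=0.5, size=(500, 48)) # stand-in "synthetic" data

# Column-mean match: a perfect generator puts every point on y = x.
mean_gap = np.abs(real.mean(axis=0) - synthetic.mean(axis=0)).max()

# Feature-sum distributions: compare the row sums of both matrices.
real_sums, synth_sums = real.sum(axis=1), synthetic.sum(axis=1)
print(mean_gap, real_sums.mean(), synth_sums.mean())
```

A real evaluation would scatter the two sets of column means and histogram the two sets of row sums, as Figs. 5 and 6 do.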

Plots of each feature in our actual dataset demonstrate the similarity between the synthesized and actual datasets.

Five distinct models were trained on the synthetic data (Random Forest, XGBoost, Bagging Classifier, Decision Tree, and Gradient Boosting Classifier). The real data was used for testing, and three metrics were applied to quantify performance: precision, recall, and F1 score, along with the confusion matrix.

As shown in Figs. 7, 8, 9, 10 and 11, the performance of the Gradient Boosting Classifier proved superior to that of the other models, with higher precision, recall, and F1 score for each class, and a single misclassification. Consequently, we expect that SHAP's and LIME's interpretations of the Gradient Boosting model on the testing set will reflect accurate and exhaustive information for the cytokine data set.
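The train-on-synthetic, test-on-real protocol can be sketched for one of the five models. This uses scikit-learn's GradientBoostingClassifier on random stand-in arrays; the study's actual synthetic and real cytokine sets, and the four other models, follow the same pattern.

```python
# Train on (stand-in) synthetic data, evaluate on (stand-in) real data
# with precision/recall/F1 and a confusion matrix.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, confusion_matrix

rng = np.random.default_rng(3)

def make_set(n):
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in class label
    return X, y

X_syn, y_syn = make_set(500)   # plays the role of the 500 synthetic samples
X_real, y_real = make_set(87)  # plays the role of the 87 real records

clf = GradientBoostingClassifier(random_state=0).fit(X_syn, y_syn)
pred = clf.predict(X_real)
print(confusion_matrix(y_real, pred))
print(classification_report(y_real, pred, digits=2))
```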

Confusion matrix and classification report of Random Forest.

Confusion matrix and classification report of Gradient Boosting.

Confusion matrix and classification report of XGB Classifier.

Confusion matrix and classification report of Bagging Classifier.

Confusion matrix and classification report of Decision Tree.

Explaining a prediction refers to presenting written or visual artifacts that enable qualitative understanding of the relationship between an instance's components and the model's prediction. We suggest that if the explanations are accurate and understandable, explaining predictions is an essential component of convincing humans to trust and use machine learning effectively43. Figure 12 depicts the process of explaining individual predictions using LIME and SHAP, approaches that approximate the classifier's black box in order to explain individual predictions. When explanations are provided, a doctor is clearly in a much better position to decide using a model. In our study, Gradient Boosting predicts whether a patient has an acute case of COVID-19, whereas LIME and SHAP highlight the cytokines that contributed to this prediction.

The flow chart demonstrates how machine learning can be used to make medical decisions. We entered cytokine data from severe, non-severe, and healthy patients, trained predictive models on the cytokine data, and then used LIME and SHAP to explain the most important cytokines for each class of patients (Fig. 12).

The SHAP explainer utilized in this study is the Kernel Explainer, a model-agnostic approach that produces a weighted linear regression depending on the data, the predictions, and the model64. It examines the contribution of a feature by evaluating the model output when the feature is removed from the input, for various (theoretically all) combinations of features. The Kernel Explainer makes use of a background dataset to define how missing inputs are handled, i.e., how a missing feature is approximated during the toggling process.

SHAP computes the impact of each characteristic on the learned system's predictions. SHAP values are produced for a single prediction (local explanations) and aggregated over multiple samples (resulting in global explanations).
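For a model small enough to enumerate every feature coalition, the toggling idea above can be computed exactly. This is a from-scratch illustration of Shapley values on a toy linear model, not the shap library's Kernel Explainer, which approximates the same quantity by weighted regression over sampled coalitions.

```python
# Exact Shapley values for a 3-feature toy model. "Missing" features are
# replaced by a background value, mirroring KernelExplainer's toggling.
from itertools import combinations
from math import factorial

import numpy as np

def model(x):                        # toy black box: a weighted sum
    w = np.array([2.0, -1.0, 0.5])
    return float(w @ x)

x = np.array([1.0, 2.0, 3.0])        # instance to explain
background = np.zeros(3)             # background used for "missing" features

def value(subset):
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return model(z)

n = 3
phi = np.zeros(n)
for i in range(n):
    rest = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(rest, k):
            # Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += weight * (value(S + (i,)) - value(S))

print(phi)  # for a linear model with zero background, phi[i] = w[i] * x[i]
```

The values sum to the model output minus the background output, the "efficiency" property that makes the violin plots in Fig. 13 interpretable as additive contributions.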

Figure 13 illustrates the top 20 features by SHAP value for each class in the cytokine data prediction model (Healthy, Severe, and Non-Severe classes). The distribution of SHAP values for each feature is illustrated using a violin diagram. The displayed characteristics are ordered by their highest SHAP value, and the horizontal axis represents the SHAP value. The bigger the positive SHAP value, the greater the positive effect of the feature, and vice versa. The color represents the magnitude of a characteristic's value, shifting from red to blue as the feature's value decreases. For example, for MIP-1b in Fig. 8, the positive SHAP value increases as the value of the feature increases. This may be interpreted as the probability of a patient developing severe COVID-19 increasing as MIP-1b levels rise.

Examples of SHAP values computed for individual predictions (local explanations) for Healthy, Non-Severe, and Severe patients.

In the case of a healthy patient, TNF, IL-22, and IL-27 are the most influential cytokines, as shown in the first SHAP diagram (from left) in Fig. 14. The second diagram is for a patient with severe disease, and we can observe that the VEGF-A cytokine's value is given greater weight. This can be viewed as an indication that the patient developed a serious COVID-19 infection due to the increase in this cytokine.

SHAP diagrams of characteristics with varying conditions: Healthy, Severe, and Non-Severe, respectively.

The last SHAP diagram depicts an instance of a non-severe patient, and we can see that the higher the feature value, the more positive the direction of IL-27. On the other hand, the MDC, PDGF-AB/BB, and VEGF-A cytokines have a deleterious effect. The levels of the MDC and PDGF-AB/BB cytokines suggest that the patient may be recovering; however, the presence of VEGF-A suggests that the patient may develop a severe case of COVID-19, despite being underweight.

LIME is a graphical approach that helps explain specific predictions. As its name suggests, it can be applied to any supervised regression or classification model. Behind LIME's operation is the premise that every complex model is linear on a local scale, and that it is possible to fit a simple model around a single observation that mimics the behavior of the global model at that locality. In our context, LIME operates by sampling the data surrounding a prediction and training a simple interpretable model to approximate the black box of the Gradient Boosting model. The interpretable model then explains the black-box model's predictions in a local region around the prediction by quantifying the contributions of the features to those predictions. As shown in Fig. 15, a bar chart depicts the distribution of LIME values for each feature, indicating the relative importance of each cytokine for predicting severity in each instance. The order of the shown features corresponds to their LIME value.
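The local-surrogate premise just described can be shown in a few lines: perturb around one instance, weight the perturbations by proximity, and fit a weighted linear model to the black box's outputs. This is a from-scratch sketch of the idea on a toy function, not the lime library, and the black box, instance, and kernel bandwidth are all illustrative choices.

```python
# LIME's core idea: a proximity-weighted linear surrogate fitted to
# black-box outputs on samples drawn around the instance to explain.
import numpy as np

rng = np.random.default_rng(4)

def black_box(X):                    # stand-in for the Gradient Boosting model
    return np.tanh(2 * X[:, 0] - X[:, 1])

x0 = np.array([0.1, -0.2])           # instance to explain
Z = x0 + 0.1 * rng.normal(size=(500, 2))            # samples around the prediction
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.02)   # proximity kernel weights

# Weighted least squares via sqrt-weight scaling of a linear design matrix.
A = np.c_[Z, np.ones(len(Z))]
y = black_box(Z)
W = np.sqrt(w)[:, None]
beta, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)

print(beta[:2])  # local feature weights, roughly proportional to (2, -1)
```

The fitted coefficients play the role of the per-cytokine bars in Fig. 15: they say which features push this one prediction up or down locally.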

The illustrations in Fig. 16 explain various LIME predictions. We note that the model has a high degree of confidence that the condition of these patients is Severe, Non-Severe, or Healthy. In the graph where the predicted value is 2, indicating that the expected class for this patient is Severe (which is correct), we can see that an MIP-1b level greater than 41 and a VEGF-A level greater than 62 have the greatest influence on severity, increasing it. The MCP-3 and IL-15 cytokines, however, have a negligible effect in the other direction.

Explaining individual predictions of the Gradient Boosting classifier with LIME.

Alternatively, there are numerous cytokines whose significant levels influence non-severity, for example IL-27 and IL-9, as shown in the middle graph in Fig. 14; IL-12p40 below a certain value may have the opposite effect on the model's decision-making. RANTES levels less than 519, on the other hand, indicate that the patient is healthy, as shown in Fig. 16.

By comparing the individual SHAP explanations with the individual LIME explanations for the same patients, we can determine how these two methods differ in explaining the severity results of the Gradient Boosting model, and thereby validate and gain insight into the impact of the most significant factors. To do so, we begin by calculating the frequency of the top ten features across all patients for each explainer. We only consider features that appear in the top three positions, as we believe this signifies a feature's high importance, and we only keep the highest-scoring features that appear at least ten times across all SHAP or LIME explanations (Tables 4, 5, and 6).

Table 4 demonstrates that MIP-1b, VEGF-A, and IL-17A have unanimous importance according to both the SHAP values and LIME. In addition, we can remark that M-CSF is important for LIME but is ranked low.

In the instance of non-severity, Table 5 reveals that IL-27 and IL-9 are essential in both explanatory models for understanding non-severity in patients. We can see that IL-12p40 and MCP-3 are also essential for LIME and are highly ranked; hence, we add these two characteristics to the list of vital features for the non-severity instance. According to Table 6, RANTES, TNF, IL-9, IL-27, and MIP-1b are the most significant elements in the healthy scenario.

The elements that explain the severity of COVID-19 disease are summarized in Table 7.

See the rest here:
Explanatory predictive model for COVID-19 severity risk employing ... - Nature.com

A new look at the lives of ultra-Orthodox Jews: Shtetl.org provides … – New York Daily News

In a jammed media sphere littered with the crumbling shells of failed news sites, it's hard to imagine a new one whose mission could really set it apart. Nevertheless, when Naftuli Moster, a former Hasid known for his tireless advocacy for improved secular education in the ultra-Orthodox community, asked me to help launch his new project, I jumped at the chance; I saw that contrary to King Solomon's famous pronouncement, this was, after all, something new under the sun:

Shtetl.org, focused on New York's rapidly growing ultra-Orthodox community, will be the first such news outlet not under the control of an established rabbinical leader or sect from within that community. With aspirations to meet the highest standards of traditional journalism, this new English-language publication's aim is to report without fear or favor, a mission that promises to at once shake up, inform and illuminate an insular civic sector whose growing presence and clout reverberate far beyond its redoubts in Brooklyn neighborhoods such as Borough Park and Williamsburg, and upstate New York towns such as Monsey and Kiryas Joel.

Ultra-Orthodox Jews dressed for the Passover holiday stand outside the New Jersey Center for the Performing Arts (NJPAC), April 24, 2019, in Newark, N.J. (Kathy Willens/AP)

Still, unlike Moster, I am myself a secular Jew, one who spent many years as news and investigations editor at The Forward, a well-known liberal media outlet steeped in secular Jewish identity, and as an investigative reporter for the Daily News. What could I possibly bring to the table?

For this, there is a backstory. It informs my own motivations and my hopes about what this new outlet can be.

By the time I turned 25, I had lived with Tibetan Buddhists in the Himalayas, an underground cell of Christian missionaries in Afghanistan, and Sufis in Shiraz, Iran, but until I knocked on a stranger's door in the Maalot Dafna neighborhood of East Jerusalem, I had never met a Haredi Jew.

It was there, in 1978, that I met and befriended a chozer b'teshuva, or returnee to faith, as formerly secular Jews are known, the son of prominent Israeli academics I knew. A former left-wing activist, my new friend now lived in a Jerusalem community that sought to maintain the lifeways of the Eastern European ghetto. Taking me through his neighborhood of narrow streets with bearded men in black coats and women in sheitels and long skirts, he brought me to his class at Yeshiva Ohr Somayach, a still-new institution at the time, funded by wealthy North American Jews and housed in a block-long building of gleaming white Jerusalem stone.

I was entranced enough, it turned out, to spend a good portion of my 10 months in Israel studying there and living in a community wholly foreign to anything I'd previously encountered.

It was through Ohr Somayach's approach to teaching Scripture that I learned for the first time what close reading really meant: a mode of critical engagement with texts quite unlike anything I'd learned in high school or college. It was also my encounter with a formidable system of thought whose sexism and ethnic chauvinism shook me deeply. I ultimately turned in a different direction. But paradoxically, I owe to this confrontation in my mid-20s a sharpened mind and a greatly deepened sense of Jewish identity.

Women pushing strollers walk past the Yeshiva Kehilath Yakov School in the South Williamsburg neighborhood, April 9, 2019 in Brooklyn. (Drew Angerer/Getty Images)

During this sojourn, I lived in a community whose ethos of mutual support and solidarity taught me lessons that have stayed with me to this day. They have helped inform my faith in everything from the redistribution of wealth through programs like Social Security and Medicare to my belief in the centrality of decency and compassion as the existential cornerstones of a viable polity.

At the same time, I was astonished at some of the conversations I'd find myself in with brilliant men (the yeshivas were all male) who'd grown up in this world. Amid complex legal discussions, they would simply stare at me blankly when I'd make references in passing to Chairman Mao, feudalism, antibodies, Neanderthals, Tahiti, Fidel Castro, the U.S. Constitution's Fourth Amendment, and Charles Darwin, to name but a few.

It wasnt until decades later that I understood why. Working as a reporter for prominent Jewish newspapers, I learned, to my surprise, that many ultra-Orthodox Jews never looked at those papers, much less non-Jewish news outlets. Nor was television permitted in their homes. Haredi rabbis condemn these outside news sources, instead authorizing only news sources they or their factions control, directly or indirectly.

These publications offer a strictly authorized version of reality, with results that can range from comic to cruel. In one instance, the Brooklyn Yiddish weekly Di Tzeitung was forced to apologize to the Obama White House in 2011 for airbrushing Secretary of State Hillary Clinton out of an historic Situation Room gathering, a move in line with its policy of banning female images to maintain sexual modesty. The iconic photograph, whose usage agreement banned such airbrushing, captured President Barack Obama and key members of his national security team gathered around a monitor watching as Navy SEALs in Pakistan closed in on 9/11 mastermind Osama Bin Laden.

More disturbingly, and much closer to home, one Hasidic newspaper's recent campaign on behalf of a convicted child sex abuser laid the ground for the grand rebbe of one of the largest Hasidic sects to honor him with a highly publicized pilgrimage to visit the abuser in prison. The campaign and the November visit took place against the backdrop of a continuing effort by Rabbi Zalman Teitelbaum's Satmar sect to win commutation of the 50-year sentence being served by Nechemya Weberman. Not coincidentally, Vochenshrift, the Yiddish newspaper that conducted the campaign, is loyal to Teitelbaum's faction.

An Orthodox Jewish man walks through the Borough Park neighborhood on the eve of the Passover holiday on April 8, 2020 in New York. (Spencer Platt/Getty Images)

Weberman, now 64, was convicted in 2012 on 59 counts of repeated sexual assault, including rape, of a teen member of the sect whom he was treating as an unlicensed therapist, starting from when she was 12. (Two counts were later reversed on appeal.) A Daily News article identified 10 other young women who claimed that Weberman had sexually assaulted them but reported that they were too afraid to come forward and face the shunning and intimidation that sect members inflicted on the accuser.

Vochenshrift's series on Weberman, which started in August, lionized him as a "tremendous Hasid" and victim of mesira, a grave sin wherein one Jew informs on another in contravention of Jewish law. The articles inspired a parade of solidarity visits to Weberman by other Hasidim, culminating in the grand rebbe's journey.

"They say he's wrongfully accused," Shulim Leifer, a member of the Hasidic community, told JTA. "It's written in a sense that it's a foregone conclusion, that it's a lynching that he went through."

Given the media environment in which they live, it's little wonder that many Hasidim would look at the case this way. Moreover, these tightly controlled media outlets inspire reactions with real-world political consequences. During the trial, the young woman suffered widespread condemnation as a zona, or whore, and threats from other Satmar Hasidim for daring to report her abuse to secular law enforcement authorities.

Prior to the trial, more than 1,000 Hasidic men flocked to a banquet that raised an estimated $500,000 for Weberman's defense. Brooklyn's then-district attorney, Charles Hynes, prosecuted Weberman, his first-ever high-profile case against a member of the borough's Satmar community, only after sustained criticism that he had for many years shrunk from pursuing such trials. Hynes denied the charge. But as Leon Goldenberg, an Orthodox political activist, noted at the time, "The fact is that [Orthodox Jews] make up 10 to 15% of the electorate."


More recently, Hynes' successor, Eric Gonzalez, called on the governor to commute Weberman's sentence, the only instance of such an appeal by Gonzalez on behalf of a convicted sex abuser, according to The City. Gonzalez's August 2021 letter to the governor, which went unanswered, may have been ill-timed. It arrived on Andrew Cuomo's last day in office, following his resignation in a scandal.

Mobilized by their distorted media bubble, this voting bloc intimidates city and state leaders from enforcing laws on everything from fire codes to education.

In 2011, the chief of the fire department responsible for New Square, a Hasidic enclave of almost 10,000 in Rockland County, told The Forward that at least 60% of its structures had serious code violations. Rockland County lawmaker Joe Meyers was blunt about why. "New Square has a lot of power to deliver votes in elections," he said. "Officials who otherwise do their jobs fall down when it comes to New Square."

As mayor of New York City, Bill de Blasio was no less mindful of this blocs power. In an official 2019 report, city investigators cited political horse-trading between his representatives and state legislators as the reason for a one-year delay in the citys release of a report finding Hasidic yeshivas were failing to give their students an adequate, legally required secular education.

We can counterbalance the hold that rabbinically controlled media outlets maintain on their readers, many of whom are actually hungry for news that directly impacts their lives. Quietly ignoring the rabbinical ban on the internet, they seek out news on anonymized laptops or second mobile phones in the privacy of their own homes. Shtetl will focus on this audience's concerns. This holds the potential to cultivate a cohort whose information horizons extend beyond the narrow limits dictated by their leaders. Shtetl's reports and investigations will also inform political leaders, journalists, civic leaders and taxpayers outside the ultra-Orthodox community about the many issues whose ramifications affect everyone.

For me, it even dangles the promise, eventually, of being able, after so many decades, to hold discussions about topics ranging from Darwin to democracy with some of the best-trained minds I have ever encountered. That's why I agreed to join Shtetl's board and hope to contribute to its success.

Cohler-Esses, a former Daily News investigative reporter, is a board member of Shtetl-Haredi Free Press.

Visit link:
A new look at the lives of ultra-Orthodox Jews: Shtetl.org provides ... - New York Daily News

What Is Few Shot Learning? (Definition, Applications) – Built In

Few-shot learning is a subfield of machine learning and deep learning that aims to teach AI models to learn from only a small number of labeled training examples. The goal of few-shot learning is to enable models to generalize to new, unseen data samples based on the small number of samples we give them during the training process.

In general, few-shot learning involves training a model on a set of tasks, each of which consists of a small number of labeled samples. We train the model to learn how to recognize patterns in the data and to apply this knowledge to new tasks.

One challenge of traditional machine learning is that training a model requires large amounts of labeled training data. Training on a large data set allows machine learning models to generalize to new, unseen data samples. However, in many real-world scenarios, obtaining a large amount of labeled data can be very difficult, expensive, time consuming or all of the above. This is where few-shot learning comes into play: it enables machine learning models to learn from only a few labeled data samples.


One reason few-shot learning is important is because it makes developing machine learning models in real-world settings feasible. In many real-world scenarios, it can be challenging to obtain a large data set we can use to train a machine learning model. Learning on a smaller training data set can significantly reduce the cost and effort required to train machine learning models. Few-shot learning makes this possible because the technique enables models to learn from only a small amount of data.

Few-shot learning can also enable the development of more flexible and adaptive machine learning systems. Traditional machine learning algorithms are typically designed to perform well on specific tasks and are trained on huge data sets with a large number of labeled examples. This means that algorithms may not generalize well to new, unseen data or perform well on tasks that are significantly different from the ones on which they were trained.

Few-shot learning solves this challenge by enabling machine learning models to learn how to learn and adapt quickly to new tasks based on a small number of labeled examples. As a result, the models become more flexible and adaptable.

Few-shot learning has many potential applications in areas such as computer vision, natural language processing (NLP) and robotics. For example, when we use few-shot learning in robotics, robots can quickly learn new tasks based on just a few examples. In natural language processing, language models can better learn new languages or dialects with minimal training data.


Few-shot learning has become a promising approach for solving problems where data is limited. Here are three of the most promising approaches for few-shot learning.

Meta-learning, also known as learning to learn, involves training a model to learn the underlying structure (or meta-knowledge) of a task. Meta-learning has shown promising results for few-shot learning tasks, where the model is trained on a set of tasks and learns to generalize to new tasks from just a few data samples. During the meta-learning process, we can train the model using meta-learning algorithms such as model-agnostic meta-learning (MAML) or by using prototypical networks.
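To illustrate the prototypical-network idea, here is a minimal numpy sketch of the classification step, assuming an encoder has already mapped the support and query examples into an embedding space (the function name and inputs are illustrative):

```python
import numpy as np


def prototypical_predict(support_embeds, support_labels, query_embeds):
    """Nearest-prototype classification, the core idea behind
    prototypical networks: each class prototype is the mean of that
    class's support-set embeddings, and each query is assigned to the
    class whose prototype is closest in Euclidean distance.
    """
    support_embeds = np.asarray(support_embeds, dtype=float)
    support_labels = np.asarray(support_labels)
    query_embeds = np.asarray(query_embeds, dtype=float)

    classes = np.unique(support_labels)
    # one prototype per class: mean embedding of that class's support examples
    prototypes = np.stack([support_embeds[support_labels == c].mean(axis=0)
                           for c in classes])
    # squared Euclidean distance from every query to every prototype
    dists = ((query_embeds[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

In a real prototypical network, the encoder itself is trained end to end so that these prototype distances separate the classes well; the snippet above only shows the inference step.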

Data augmentation refers to a technique wherein new training data samples are created by applying various transformations to the existing training data set. One major advantage of this approach is that it can improve the generalization of machine learning models in many computer vision tasks, including few-shot learning.

For computer vision tasks, data augmentation involves techniques like rotation, flipping, scaling and color jittering existing images to generate additional image samples for each class. We then add these additional images to the existing data set, which we can then use to train a few-shot learning model.
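As a minimal numpy-only sketch of such geometric augmentation (assuming square images, since a 90-degree rotation changes the height and width of non-square arrays; the function name is illustrative):

```python
import numpy as np


def augment_images(images):
    """Generate extra training samples via simple geometric transforms:
    horizontal flip, vertical flip, and a 90-degree rotation.

    images: (n, h, w) or (n, h, w, c) array with h == w.
    Returns the original images stacked with the augmented copies.
    """
    images = np.asarray(images)
    out = [images]
    out.append(images[:, :, ::-1])                   # horizontal flip (width axis)
    out.append(images[:, ::-1, :])                   # vertical flip (height axis)
    out.append(np.rot90(images, k=1, axes=(1, 2)))   # rotate 90 degrees
    return np.concatenate(out, axis=0)
```

Each original image yields four samples in total, so a 5-shot support set becomes an effective 20-shot one before any model sees it; color jittering and scaling would be applied analogously.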

Generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have shown promising results for few-shot learning. These models are able to generate new data points that are similar to the training data.

In the context of few-shot learning, we can use generative models to augment the existing data with additional examples. The model does this by generating new examples that are similar to the few labeled examples available. We can also use generative models to generate examples for new classes that are not present in the training data. By doing so, generative models can help expand the data set for training and improve the performance of the few-shot learning algorithm.
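As a toy stand-in for a generative augmenter (a real system would train a VAE or GAN; this sketch simply fits a per-class Gaussian to the few labeled feature vectors and samples synthetic points from it), the idea can be illustrated as:

```python
import numpy as np


def gaussian_augment(features, labels, n_new=5, rng=None):
    """Toy generative augmentation: model each class's few labeled
    feature vectors as a diagonal Gaussian and sample n_new synthetic
    examples per class from that fit.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    gen = np.random.default_rng(rng)
    new_x, new_y = [], []
    for c in np.unique(labels):
        xc = features[labels == c]
        mean = xc.mean(axis=0)
        # diagonal spread; small floor keeps sampling stable with few samples
        std = xc.std(axis=0) + 1e-6
        new_x.append(gen.normal(mean, std, size=(n_new, xc.shape[1])))
        new_y.append(np.full(n_new, c))
    return np.concatenate(new_x), np.concatenate(new_y)
```

The synthetic samples are then pooled with the real few-shot examples before training, which is the same role a learned generative model would play, only with far more realistic samples.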

In computer vision, we can apply few-shot learning to image classification tasks, wherein our goal is to classify images into different categories. In this example, we can use few-shot learning to train a machine learning model to classify images with a limited amount of labeled data. Labeled data refers to a set of images with corresponding labels, which indicate the category or class to which each image belongs. In computer vision, obtaining a large amount of labeled data is often difficult. For this reason, few-shot learning can be helpful, since it allows machine learning models to learn from less labeled data.

Few-shot learning can be applied to various NLP tasks like text classification, sentiment analysis and language translation. For instance, in text classification, few-shot learning algorithms could learn to classify text into different categories with only a small number of labeled text examples. This approach can be particularly useful for tasks in the area of spam detection, topic classification and sentiment analysis.


In robotics, we can apply few-shot learning to tasks like object manipulation and motion planning. Few-shot learning can enable robots to learn to manipulate objects or plan their movement trajectories by using small amounts of training data. For robotics, the training data typically consists of demonstrations or sensor data.

In medical imaging, few-shot learning can help us train machine learning models for tasks such as tumor segmentation and disease classification. In medicine, the number of available images is usually limited due to strict legal regulations and data protection laws around medical information. As a result, there is less data available on which to train machine learning models. Few-shot learning addresses this problem by enabling machine learning models to learn these tasks from a limited data set.

See original here:
What Is Few Shot Learning? (Definition, Applications) - Built In