Archive for the ‘Media Control’ Category

Increased Use of Telehealth Services and Medications for Opioid … – CDC

The expanded availability of opioid use disorder-related telehealth services and medications during the COVID-19 pandemic was associated with a lowered likelihood of fatal drug overdose among Medicare beneficiaries, according to a new study.

"The results of this study add to the growing research documenting the benefits of expanding the use of telehealth services for people with opioid use disorder, as well as the need to improve retention and access to medication treatment for opioid use disorder," said lead author Christopher M. Jones, PharmD, DrPH, Director of the National Center for Injury Prevention and Control, CDC. "The findings from this collaborative study also highlight the importance of working across agencies to identify successful strategies to address and get ahead of the constantly evolving overdose crisis."

Published today in JAMA Psychiatry, this study is a collaborative research effort led by researchers at the National Center for Injury Prevention and Control, a part of the Centers for Disease Control and Prevention (CDC); the Office of the Administrator and the Center for Clinical Standards and Quality, both part of the Centers for Medicare & Medicaid Services (CMS); and the National Institute on Drug Abuse, a part of the National Institutes of Health (NIH).

"CMS is committed to ensuring that the beneficiaries we serve can access the high-quality behavioral health services they need," said senior author Shari Ling, M.D., Deputy Chief Medical Officer at CMS. "This study shows that many beneficiaries were able to utilize opioid use disorder-related telehealth services during the pandemic, but we need to continue our efforts to broaden the use of telehealth, particularly in underserved communities."

In this national study, researchers analyzed data from two cohorts of Medicare beneficiaries to explore receipt of opioid use disorder-related telehealth services, receipt of medications for opioid use disorder, and fatal overdoses before and during the COVID-19 pandemic. The first cohort was constructed with data from September 2018-February 2020 and included 105,162 Medicare beneficiaries with opioid use disorder (the pre-pandemic cohort). The second cohort was constructed with data from September 2019-February 2021 and included 70,479 Medicare beneficiaries with opioid use disorder (the pandemic cohort). In addition, the researchers examined the demographic and clinical characteristics associated with fatal overdose in the pandemic cohort.

Key findings of this study include:

"At a time when more than 100,000 Americans are now dying annually from a drug overdose, the need to expand equitable access to lifesaving treatment, including medications for opioid use disorder, has never been greater," said Wilson Compton, M.D., M.P.E., deputy director of the National Institute on Drug Abuse and senior author of the study. "Research continues to indicate that expanded access to telehealth is a safe, effective, and possibly even lifesaving tool for caring for people with opioid use disorder, which may have a longer-term positive impact if continued."

Although this study identified the positive impact of opioid use disorder-related telehealth services on lowering the risk for fatal drug overdose in the pandemic cohort, the authors note that only 1 in 5 Medicare beneficiaries in that cohort received OUD-related telehealth services. Similarly, only 1 in 8 beneficiaries in the pandemic cohort received medications for opioid use disorder. These findings underscore the need for continued expansion of these potentially life-saving interventions across clinical settings.

Find Treatment for Substance Use Disorder, including Opioid Use Disorder:

If you or someone close to you needs help for a substance use disorder, talk to your doctor or call SAMHSA's National Helpline at 1-800-662-HELP or go to SAMHSA's Behavioral Health Treatment Services

Additional Resources:

If you have questions about any medicines, call the U.S. Department of Health and Human Services Poison Help Hotline at 1-800-222-1222.

Read more:
Increased Use of Telehealth Services and Medications for Opioid ... - CDC

Taliban Close Women’s Radio Station for Airing Music – Voice of America – VOA News

The Taliban have closed a local women's radio station in the northeastern province of Badakhshan for broadcasting music. Media watchdogs considered the move an attempt to bar women from working in media in the province.

On Friday, Mazuddin Ahmadi, the Taliban's director of information and culture for Badakhshan province, told VOA that Radio Sada-e-Banowan (Voice of Women Radio) was closed because it "violated policies."

"We repeatedly told them that airing music is forbidden and you should not air music," said Ahmadi. "Unfortunately, during the month of Ramadan, they ignored several warnings and aired music. Finally, yesterday, after consulting with elders, we closed the radio station."

Ahmadi said that "the closure is temporary, and if the radio officials guarantee that they will not air music, we will allow the radio to broadcast again."

But a local journalist in the province who has knowledge of the case and asked to remain anonymous for fear of reprisal told VOA that the radio station's local programs did not include any music.

"The Taliban claimed that music was aired, but they did not say when and what type of music was aired," the journalist said.

He added that even before the Taliban's takeover, radio stations in Badakhshan province had avoided broadcasting music during the holy month of Ramadan.

The journalist said the local Taliban authorities "planned to close the women-run radio in the province months ago, and they did it during the month of Ramadan."

Radio Sada-e-Banowan was one of the few Afghan radio stations run by women that had been operating since the Taliban takeover in August 2021.

In Afghanistan, Some Female Journalists Find Ways to Stay on Air

Paris-based Reporters Without Borders reported that there were no female journalists in 11 of Afghanistan's 34 provinces, and that about 600 of the 2,700 female reporters active before the Taliban took control were still working in the country.

Afghan Women Absent From Jobs and Stories in Media

After returning to power, the Taliban imposed repressive measures on women, including banning them from work, secondary and university education, and unaccompanied long-distance travel.

The Taliban in Badakhshan "asked women media workers not to go to offices. Therefore, women working with the radio station were producing their shows at home and then airing them on the radio," said Gul Mohammad Graan, president of the Afghan chapter of the South Asian Association of Reporters Club and Journalists Forum.

No media law

Graan told VOA that the problem arises from a misunderstanding over how the media law should be implemented.

"I think there is a legal vacuum, and it has created misunderstanding. Radio and other media operators and managers do not know what to broadcast, and [the Taliban's] government officials in different provinces have their own interpretation of the broadcasting directives."

In September 2021, shortly after coming to power, the Taliban issued broadcasting directives that media watchdogs interpreted as a sign that the Taliban planned to censor media content in the country.

"Notifications and letters are not enough," Graan said. "If the previous media law does not have any problem, then it should be enacted. If not, then they should come up with a new media law."

In February 2022, spokesman Zabihullah Mujahid said the Taliban had no issue with enacting the former government's media law. He promised to revive the Joint Media and Media Violation commissions.

According to media watchdogs, the press freedom situation has deteriorated in Taliban-run Afghanistan, where media face censorship and violence and women's voices are largely silenced.

Facing worst situation

Hamid Obaidi, a former journalism lecturer at Kabul University and the head of the Afghanistan Journalists Support Organization, told VOA he believes the closure of women-run radio in Badakhshan is an attempt to stop women from working in media.

"In the past several months, we have witnessed that women journalists and media workers face repressive restrictions. This means that the Taliban have problems with women journalists working, and they aim to stop women from working," he said.

He added that the situation for female journalists in Afghanistan has become increasingly difficult under the Taliban.

"Women journalists who continue to work face the worst situation in Afghanistan," he said.

Ekram Shinwari and VOA's Afghan Service contributed to this report.

See the original post:
Taliban Close Women's Radio Station for Airing Music - Voice of America - VOA News

AI has much to offer humanity. It could also wreak terrible harm. It … – The Guardian

In case you have been somewhere else in the solar system, here is a brief AI news update. My apologies if it sounds like the opening paragraph of a bad science fiction novel.

On 14 March 2023, OpenAI, a company based in San Francisco and part-owned by Microsoft, released an AI system called GPT-4. On 22 March, a report by a distinguished group of researchers at Microsoft, including two members of the US National Academies, claimed that GPT-4 exhibits "sparks" of artificial general intelligence. (Artificial general intelligence, or AGI, is a term for AI systems that match or exceed human capabilities across the full range of tasks to which the human mind is applicable.) On 29 March, the Future of Life Institute, a non-profit headed by the MIT physics professor Max Tegmark, released an open letter asking for a pause on giant AI experiments. It has been signed by well-known figures such as Tesla's CEO, Elon Musk, Apple's co-founder Steve Wozniak, and the Turing Award winner Yoshua Bengio, as well as hundreds of prominent AI researchers. The ensuing media hurricane continues.

I also signed the letter, in the hope it will (at least) lead to a serious and focused conversation among policymakers, tech companies and the AI research community on what kinds of safeguards are needed before we move forward. The time for saying that this is just pure research has long since passed.

So what is the fuss all about? GPT-4, the proximal cause, is the latest example of a large language model, or LLM. Think of an LLM as a very large circuit with (in this case) a trillion tunable parameters. It starts out as a blank slate and is trained with tens of trillions of words of text, as much as all the books humanity has produced. Its objective is to become good at predicting the next word in a sequence of words. After about a billion trillion random perturbations of the parameters, it becomes very good.
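The "predict the next word" objective described above can be sketched in miniature. The toy model below is a character-level bigram counter in plain Python, nothing like a real LLM with its trillion tunable parameters, and all names here are illustrative; but it performs the same basic task an LLM learns at vastly greater scale, estimating which symbol is most likely to come next.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, how often each successor follows it --
    a toy stand-in for an LLM's next-word prediction objective."""
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def next_char_probs(counts, ch):
    """Turn raw successor counts into a probability distribution."""
    total = sum(counts[ch].values())
    return {c: n / total for c, n in counts[ch].items()}

# "Training" on a tiny corpus; an LLM does the analogous thing
# over tens of trillions of words with learned parameters
# instead of a lookup table.
model = train_bigram("to be or not to be")
probs = next_char_probs(model, "t")
# 'o' follows 't' twice (in "to") and ' ' follows it once,
# so the model predicts 'o' with probability 2/3
```

The "random perturbations" the article mentions correspond to the training updates that nudge a real model's parameters toward assigning higher probability to the word that actually came next.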

The capabilities of the resulting system are remarkable. According to OpenAI's website, GPT-4 scores in the top few per cent of humans across a wide range of university entrance and postgraduate exams. It can describe Pythagoras's theorem in the form of a Shakespeare sonnet and critique a cabinet minister's draft speech from the viewpoint of an MP from any political party. Every day, startling new abilities are discovered. Not surprisingly, thousands of corporations, large and small, are looking for ways to monetise this unlimited supply of nearly free intelligence. LLMs can perform many of the tasks that comprise the jobs of hundreds of millions of people: anyone whose work is language-in, language-out. More optimistically, tools built with LLMs might be able to deliver highly personalised education the world over.

Unfortunately, LLMs are notorious for "hallucinating": generating completely false answers, often supported by fictitious citations, because their training has no connection to an outside world. They are perfect tools for disinformation, and some assist with and even encourage suicide. To its credit, OpenAI suggests avoiding high-stakes uses altogether, but no one seems to be paying attention. OpenAI's own tests showed that GPT-4 could deliberately lie to a human worker ("No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images") in order to get help solving a captcha test designed to block non-humans.

While OpenAI has made strenuous efforts to get GPT-4 to behave itself ("GPT-4 responds to sensitive requests (eg medical advice and self-harm) in accordance with our policies 29% more often"), the core problem is that neither OpenAI nor anyone else has any real idea how GPT-4 works. I asked Sébastien Bubeck, lead author on the "sparks" paper, whether GPT-4 has developed its own internal goals and is pursuing them. The answer? "We have no idea." Reasonable people might suggest that it's irresponsible to deploy on a global scale a system that operates according to unknown internal principles, shows "sparks" of AGI and may or may not be pursuing its own internal goals. At the moment, there are technical reasons to suppose that GPT-4 is limited in its ability to form and execute complex plans, but given the rate of progress, it's hard to say that future releases won't have this ability. And this leads to one of the main concerns underlying the open letter: how do we retain power over entities more powerful than us, for ever?

OpenAI and Microsoft cannot have it both ways. They cannot deploy systems displaying "sparks" of AGI and simultaneously argue against any regulation, as Microsoft's president, Brad Smith, did at Davos earlier this year. The basic idea of the open letter's proposed moratorium is that no such system should be released until the developer can show convincingly that it does not present an undue risk. This is exactly in accord with the OECD's AI principles, to which the UK, the US and many other governments have signed up: "AI systems should be robust, secure and safe throughout their entire life cycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk." It is for the developer to show that their systems meet these criteria. If that's not possible, so be it.

I don't imagine that I'll get a call tomorrow from Microsoft's CEO, Satya Nadella, saying: "OK, we give up, we'll stop." In fact, at a recent talk in Berkeley, Bubeck suggested there was no possibility that all the big tech companies would stop unless governments intervened. It is therefore imperative that governments initiate serious discussions with experts, tech companies and each other. It's in no country's interest for any country to develop and release AI systems we cannot control. Insisting on sensible precautions is not anti-industry. Chernobyl destroyed lives, but it also decimated the global nuclear industry. I'm an AI researcher. I do not want my field of research destroyed. Humanity has much to gain from AI, but also everything to lose.

Stuart Russell OBE is professor of computer science at the University of California, Berkeley

Original post:
AI has much to offer humanity. It could also wreak terrible harm. It ... - The Guardian

Fear of losing control: how brands can thrive in the creator economy … – Shots

Brands have long looked beyond the scope of their influence to borrow someone else's shine - from celebrity endorsers and brand spokespeople to infomercials, influencers, and #sponsored posts.

It's an approach as old as advertising, one that hopes to drive increased awareness, favourability, or trial. But the catch-22 of borrowed influence has always been that brands wrestle with the fear of losing control of the narrative.

In recent years, influencer marketing has shapeshifted into what brands and industry pundits call the creator economy - a rebrand even Don Draper would be proud of. As the current dominance of TikTok moves us from the social media age into a post-social era, brands are scrambling for innovative ways to authentically connect with their increasingly fragmented audiences. How can brands that fear sacrificing creative control navigate the creator economy?

With the rise of the social media influencer over the past decade-plus, brands tested the limits of said influence and learned valuable lessons along the way. The social media age and an influx of brand-sponsored content brought with it new cultural norms and even FTC regulations (#ad), but also put influencer authenticity under a magnifying glass. Brands wanted to control the narrative and used influencers for their reach rather than their taste or talent. An increasingly discerning social audience called BS on the contrived sponsored posts, and influencer fatigue ran rampant as brands struggled to nail the authenticity their audiences desired.

Original post:
Fear of losing control: how brands can thrive in the creator economy ... - Shots

Opinion | The Extraordinarily Misguided Attack on TikTok – POLITICO

The stated concern is that because TikTok's parent company is Chinese-owned, the government in Beijing could ultimately access data on hundreds of millions of American users. As FBI Director Christopher Wray said, "This is a tool that is ultimately within the control of the Chinese government and it, to me, it screams out with national security concerns." The other concern held by some critics is that the Chinese government could use TikTok's algorithms to barrage American users with disinformation and propaganda, potentially creating domestic havoc in the United States.

These issues can't be dismissed outright, but they are almost certainly overblown, according to security experts.

The data of TikTok users (age, region, passwords, names, buying habits) is no different from that collected by countless online merchants and other social media sites. While that data is private and encrypted, much of it can either be scraped anonymously (and often is, for use in the vast and profitable commercial data market) or already accessed by cyber spy agencies. User data isn't particularly secure anywhere. Whatever the Chinese government wanted to glean from TikTok users, it likely can glean anyway, regardless of where that data is stored.

Then there's the "chaos engine" theory: that TikTok, on instructions from the Chinese government, could sow confusion in domestic politics or promote a certain ideology in the United States. It has echoes of Russian meddling in the 2016 election, which naturally causes some alarm. But while a foreign government can try to use social media to spread disinformation and spur division, the net effect of that, in the context of so much other noise in the cyber world, is unclear. Could it amplify an already fractious political climate? Maybe, but almost certainly not on its own and not in any clear directional way, and that assumes full and total control of TikTok by Beijing, which is something hardly anyone currently believes or alleges.

But let's say that the Communist Party of China could and would use TikTok. Even then, banning the app is a terrible idea for the United States. Why? Because the foundational strength of the United States is that it is an open society where information can and does flow freely. Banning TikTok, a platform of often astonishingly creative and often incredibly banal content that reaches 150 million Americans, is a step back from an open society and toward a closed one.

That is why the United States mulling a TikTok ban is a very different thing than, say, India, which has already barred TikTok. The government of Narendra Modi in India has been tightening its censorship in multiple spheres, and its moves against TikTok and other Chinese apps are part of a broader attempt to control information. The United States, however, has a rich tradition of free speech and has erected a legal apparatus designed to protect it and encourage the open flow of information. It's not just the First Amendment to the Constitution and subsequent court cases and precedent designed to bolster the right of free expression; it's the implied link between a healthy, robust democracy and the ability to communicate all ideas, even ones that many find wrong and reprehensible, without fear of censorship or government suppression.


The Chinese government holds no such values, and indeed it believes that information should first and foremost serve the interests of the state. Yes, the Chinese constitution does provide for the right of free speech, but not if such speech undermines the interests of the state. Free speech in China is not seen as a key pillar of societal strength; it is provisional and valuable only insofar as it does not challenge the primacy of the Communist Party.

The United States, by contrast, has championed an open society as the ultimate guarantor of human liberty and prosperity, and as one of the most robust checks on the untrammeled exercise of government or corporate power. We can debate if openness and free speech do in fact serve those functions, but they at the very least make exercising control more difficult. And the sheer noisy vibrancy of American society has been a notable contrast to many other countries over time and one of the hallmarks of a democracy that has allowed individuals to say and do what they choose.

That has, in turn, been the fuel for a rich culture of innovation and creativity, scientifically and artistically, including the invention and commercialization of the cyber world that we all now inhabit. TikTok may be a Chinese app, but it is built on American innovation.

But if TikTok, as a social media app par excellence, is in essence a manifestation of American strength, banning TikTok is in essence a mimicking of Chinese policy. China has created its own internal intranet and erected its Great Firewall to keep unwelcome information out of the public sphere. The Chinese government, with its legion of censors, polices what can be said and how, and punishes those who deviate too far from accepted parameters. That has only increased after the country's zero-Covid policies that relied on mass surveillance of smartphones to control the movement of Chinese citizens. The efforts to control 1.4 billion people, what they say and how they say it publicly, are one way that the party retains control in China. It is a source of their strength.

The United States will never be able to compete with China in censoring information, nor should it. But it could undermine its own vitality as an open society if it heads down the path of trying to ban apps in the name of national security. The wave of blacklists and McCarthy-era crackdowns on Americans who professed Communist and even socialist beliefs and sympathies did not make the United States more secure in the early days of the Cold War; it made the country more paranoid and brittle, undermined creativity and the free flow of scientific information, and briefly threatened to undermine the stability of the very government agencies, such as the State Department and the Defense Department, that were tasked with preserving national security.

America does not do suppression of free speech particularly well, which is a good thing. And we should not optimize for a future where we do it better by making a new go at censorship. For the United States, the risks of TikTok are far outweighed by the risks of banning TikTok.

See the original post:
Opinion | The Extraordinarily Misguided Attack on TikTok - POLITICO