Archive for the ‘Social Networking’ Category

Problem drinking linked to alcohol on social media – University of Queensland

A University of Queensland study highlights a direct link between young people's exposure to alcohol-related social media content and problem drinking.

The study, led by PhD candidate Brandon (Hsu-Chen) Cheng from UQ's Australian National Centre for Youth Substance Use Research, examined results from 30 international studies of more than 19,000 people aged 24 and younger.

"We investigated the effects of exposure to alcohol-related social media content and also alcohol-related posts on their own social media profiles," Mr Cheng said.

"Our study showed young people who were exposed to alcohol-related content on social networking sites consumed more alcohol and drank more frequently than those who did not.

"We also found exposure was linked with problem drinking behaviours, such as binge drinking, which is detrimental to physical and mental health.

"Social networking sites are not just promoting alcohol consumption, but also encouraging young people to engage in dangerous drinking behaviours."

Professor Jason Connor, Director of the National Centre for Youth Substance Use Research, said alcohol consumption is one of the leading risk factors for unintentional injury, self-harm, sexual assault, alcohol overdose and death in young people.

"There is overwhelming evidence for tightening regulations on alcohol-related media on social networking sites," Professor Connor said.

"Most social media sites are self-regulated, but this has proven to be ineffective, and it can make enforcing restrictions challenging.

"For example, the minimum required age to use social media platforms is rarely confirmed by the sites or it can vary.

"Preventive measures, like tightening regulations and educating young people and their parents, can help discourage underage teenagers and young adults from engaging in high-risk drinking behaviours.

"This will ultimately reduce the considerable disease burden of alcohol use in Australia in one of our most vulnerable population groups."

The study is published in Addiction and discussed in an Addiction podcast.

Media: Faculty of Health and Behavioural Sciences Communications, habs.media@uq.edu.au, +61 435 221 246 @UQHealth

The rest is here:
Problem drinking linked to alcohol on social media - University of Queensland

Fact Checkers Take Stock of Their Efforts: ‘It’s Not Getting Better’ – The New York Times

After President Biden won the election nearly three years ago, three of every 10 Americans believed the false narrative that his victory resulted from fraud, a poll found. In the years since, fact checkers have debunked the claim in lengthy articles, corrections posted on viral content, videos and chat rooms.

This summer, they received a verdict on their efforts in an updated poll from Monmouth University: Very little has changed. Three of every 10 Americans still believed the false narrative.

With a wave of elections expected next year in dozens of countries, the global fact-checking community is taking stock of its efforts over a few intense years, and many don't love what they see.

The number of fact-checking operations at news organizations and elsewhere has stagnated, and perhaps even fallen, after a booming expansion in response to a rise in unsubstantiated claims about elections and the pandemic. The social networking companies that once trumpeted efforts to combat misinformation are showing signs of waning interest. And those who write about falsehoods around the world are facing worsening harassment and personal threats.

"It's not getting better," said Tai Nalon, a journalist who runs Aos Fatos, a Brazilian fact-checking and disinformation-tracking company.

Elections are scheduled next year in more than 5,500 municipalities across Brazil, which a few dozen Aos Fatos fact checkers will monitor. The idea exhausts Ms. Nalon, who has spent recent years navigating a disinformation-peddling president, bizarre theories about the pandemic, and an increasingly polluted online ecosystem rife with harassment, distrust and legal threats.

Ms. Nalon's organization, one of the leading operations of its kind in Brazil, started in 2015 as attention to the fight against false and misleading content online surged. It was part of a fact-checking industry that bloomed around the world. At the end of last year, there were 424 fact-checking websites, up from just 11 in 2008, according to an annual census by the Duke University Reporters' Lab.

The organizations used an arsenal of old and new tools: fact checks, pre-bunks that tried to inform viewers against misinformation before they encountered it, context labels, accuracy flags, warning screens, content removal policies, media literacy trainings and more. Facebook, which is owned by Meta, helped spur some of the growth in 2016 when it started working with and paying fact-checking operations. Online platforms, like TikTok, eventually followed suit.

Yet the momentum seems to be idling. This year, only 417 sites are active. The addition of new sites has slowed for several years, with just 20 last year compared with 83 in 2019. Sites such as the Baloney Meter in Canada and Fakt Ist Fakt in Austria have gone quiet in recent years.

The leveling-off represents something of a maturing of the field, said Angie Drobnic Holan, the director of the International Fact-Checking Network, which the nonprofit Poynter Institute started in 2015 to support fact checkers worldwide.

The work continues to draw interest from new parts of the world, and some think tanks and good-government groups have begun offering their own fact-checking services, experts said. Harassment and government repression, however, remain major deterrents. Political polarization has turned fact-checking and other misinformation defenses into a target among right-wing influencers, who claim that debunkers are biased against them.

Yasmin Green, chief executive of Jigsaw, a group within Google that studies threats like disinformation and extremism, recalled one study in which a participant scrolled past a fact check shared by a journalist from CNN and dismissed it out of hand. "Well, who fact-checks the fact checkers?" the user asked.

"We're in this highly distrustful environment where you're evaluating just on the basis of the speaker and distrusting people who you decided their judgment is not trustworthy," Ms. Green said.

Intervening against misinformation has a broadly positive effect, according to researchers. Experiments conducted in 2020 concluded that fact checks in many parts of the world reduced false beliefs for at least two weeks. A team at Stanford determined that education about misinformation after the 2016 election had probably contributed to fewer Americans visiting websites in 2020 that were not credible.

Success, however, is inconsistent and contingent on many variables: the viewer's location, age, political leaning and level of digital engagement, and whether a fact check is written or illustrated, succinct or explanatory. Many efforts never reach crucial demographics, while others are ignored or resisted.

After falsehoods swarmed Facebook during the pandemic, the platform instituted policies against Covid-19 misinformation. Some researchers, however, questioned the effectiveness of the efforts in a study published this month in the journal Science Advances. They determined that while the amount of anti-vaccine content had declined, engagement with the remaining anti-vaccine content had not.

"In other words, users engaged just as much with anti-vaccine content as they would have if content had not been deleted," said David Broniatowski, a professor at George Washington University and an author of the paper.

The remaining anti-vaccine content was more likely to be misleading, researchers found, and users linked to less trustworthy sources than they did before Facebook put its policies in place.

"Our integrity efforts continue to lead the industry, and we are laser-focused on tackling industrywide challenges," Corey Chambliss, a spokesman for Meta, said in an emailed statement. "Any suggestion to the contrary is false."

In the first six months of this year, more than 40 million Facebook posts received a fact-check label, according to a report that the company submitted to the European Commission.

Social platforms where false narratives and conspiracy theories still spread widely have scaled back anti-disinformation resources over the past year. Researchers found that fact-checking organizations and similar outlets grew gradually more dependent on social media companies for a financial lifeline; misinformation watchers now worry that increasingly budget-conscious tech companies will start reducing their philanthropy spending.

If Meta ever cuts the budget for its third-party fact-checking program, it could decimate an entire industry of fact checkers that depend on its financial support, said Yoel Roth, Twitter's former head of trust and safety and now a visiting scholar at the University of Pennsylvania. (Meta said its commitment to the program had not changed.)

X has undergone some of the most significant changes of any platform. Its billionaire owner of less than a year, Elon Musk, embraced an experiment that relied on its own unpaid users rather than paid fact checkers and safety teams. The expanded fact-checking program, Community Notes, allows anyone to write corrections on posts. Users can deem a note helpful so it becomes visible to everyone; some notes have appeared alongside content from Mr. Musk and President Biden and even a viral post about a groundhog falsely accused of stealing vegetables.

X did not respond to a request for comment. Tech watchdogs fretted this week about the quality of content on X after The Information reported that the platform was cutting half the team dedicated to managing disinformation about election integrity; the company had said less than a month earlier that it planned to expand the team.

Crowdsourced fact-checking has shown mixed results in research, said Valerie Wirtschafter, a fellow at the Brookings Institution. An article she co-wrote in The Journal of Online Trust and Safety found that the presence of a Community Note did not keep posts from spreading widely. Users who created misleading posts saw no change in the engagement for subsequent posts, suggesting that they paid no penalty for sharing falsehoods.

Since most popular posts on X get a surge in attention within the first few hours, a Community Note added hours or days later would do little to reach people who had read the falsehoods, said Mr. Roth, who resigned from the company after Mr. Musk's arrival last year.

"I've never found a way around having humans in the loop," he said in an interview. "My belief, and everything I've seen, is that on its own, Community Notes is not a sufficient replacement."

Defenders against false narratives and conspiracy theories are also struggling with another complication: artificial intelligence.

The technology's reality-warping abilities, which still manage to stump many of the tools designed to identify their use, are already keeping fact checkers busy. Last week, TikTok said it would test an "A.I.-generated" label, automatically appending it to content detected as having been edited or created with the technology.

Tests are also being run using A.I. to quickly parse the enormous volume of false information, identify frequent spreaders and respond to inaccuracies. The technology, however, has a shaky track record with truth. After the fact-checking organization PolitiFact tested ChatGPT on 40 claims that had already been meticulously researched by human fact checkers, the A.I. either made a mistake, refused to answer or arrived at a different conclusion from the fact checkers half of the time.

Between new technologies, fluctuating policies and stressed watchdogs, the online information ecosystem is in its messy adolescent years: "It's gangly, and it's got acne, and it's moody," said Claire Wardle, a co-director of the Information Futures Lab at Brown University.

She is hopeful, however, that society will learn to adapt and that most people will continue to value accuracy. Misinformation during the 2022 midterm elections was less toxic than feared, thanks partly to media literacy efforts and training that helped the authorities respond far more quickly and aggressively to rumors, she said.

"We tend to get obsessed with the very worst conspiracies, the people who got radicalized," she said. "Actually, the majority of audiences are pretty good at figuring this all out."


Go here to see the original:
Fact Checkers Take Stock of Their Efforts: 'It's Not Getting Better' - The New York Times

Threads still poses a threat to X despite slowed growth, analysis finds – Marketing Dive

Dive Brief:

At its launch in July, Meta's Threads took the social networking landscape by storm, reaching 100 million members in record time to become the fastest-growing app of all time. However, the meteoric growth quickly fell back to earth, tasking Meta with finding additional ways to make the offering stand out in an increasingly saturated market, and there's still a long way to go. Threads is expected to round out the year with 23.7 million U.S. users, equating to just 10.4% of social network users and 17.5% of Instagram users, according to Insider Intelligence.

While Threads' growth is expected to slow, the app is still forecast to eventually close the gap with Elon Musk's X, a platform projected to lose users, which could help Threads catch up. Specifically, X will have 56.1 million U.S. users this year; by 2025, that number will have dropped to 47 million, a decline Insider Intelligence attributes to user concerns over the stability of the platform and its content, an observation that follows a long stretch of criticism of the platform since Musk's takeover. Other changes, like the possibility of a user subscription fee, could drive even more users off X, though Threads shouldn't count on the platform's questionable shifts as a sustainable growth strategy.

"Threads received an initial boost from Twitter's missteps, but it can't rely on X defectors to continue to grow," Insider Intelligence analyst Jasmine Enberg said in the report. "Still, Musk's recent announcement to charge all X users a monthly subscription fee could open up a clearer avenue for Meta to monetize Threads."

Since its debut, Threads has been working on a number of updates to stay relevant, recently testing basic features like post editing, a feature X (then Twitter) refused to add for years, along with account switching and profile deletion. Additionally, the platform earlier this month added desktop functionality to its new web version of the app. Added features could help Threads support both its growth and user engagement, the latter of which Instagram chief Adam Mosseri recently cited as a core challenge.

Though Threads could catch up with X in the coming years, TikTok poses the stiffest competition for Meta, with the ByteDance platform forecast to remain the third most popular social app behind Facebook and Instagram through 2025, per the report. Popularized for its short-form video format, TikTok is also the preferred social platform among the key Gen Z demographic, spurring lookalikes from competitors like Instagram and YouTube. For Threads to stand up against TikTok, it would first require a more defined identity, according to Enberg.

"For Threads to carve a long-lasting place in the social landscape, it needs to figure out what it wants to be when it grows up. It must also do so fast: Meta isn't above ditching new apps or folding them into existing services. And Threads' identity must be more than an extension of Instagram or an alternative to X," Enberg said in the report.

Originally posted here:
Threads still poses a threat to X despite slowed growth, analysis finds - Marketing Dive

Supreme Court will look at new state laws that attempt to control … – WREX.com

Washington (CNN) The Supreme Court will leap into the online moderation debate for the second year running after the justices on Friday agreed to decide whether states can essentially control how social media companies operate.

The decision to consider laws passed in 2021 by Texas and Florida could have nationwide repercussions for how social media and all websites display user-generated content.

If upheld, the laws could open the door to more state legislation requiring platforms such as Facebook, YouTube and TikTok to treat content in specific ways within certain jurisdictions and potentially expose the companies to more content moderation lawsuits.

It could also make it harder for platforms to remove what they determine is misinformation, hate speech or other offensive material.

"These cases could completely reshape the digital public sphere. The question of what limits the First Amendment imposes on legislatures' ability to regulate social media is immensely important for speech, and for democracy as well," said Jameel Jaffer, the executive director of Columbia University's Knight First Amendment Institute, in a statement.

"It's difficult to think of any other recent First Amendment cases in which the stakes were so high," Jaffer added.

The state laws at issue authorize users to sue social media platforms over allegations of political censorship. And they restrict companies from taking down or demoting certain kinds of content even when the platforms may decide it violates their terms of service.

State officials have argued that the laws are needed to protect users' freedom of speech on online platforms, particularly for conservatives. But industry trade groups have challenged the laws as a violation of tech companies' First Amendment rights.

Federal appeals courts have split on the matter. Last year, the 5th US Circuit Court of Appeals upheld the Texas law, while the 11th US Circuit Court of Appeals blocked the Florida law as unconstitutional.

Now, the Supreme Court intends to issue the final word.

The-CNN-Wire™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.

CNN's Devan Cole contributed to this report.

Read more from the original source:
Supreme Court will look at new state laws that attempt to control ... - WREX.com

New York Bans Employers From Requiring Disclosure of Personal … – Perkins Coie

New York Governor Kathy Hochul signed bill A836 into law on September 14, 2023, prohibiting employers from requesting or requiring employees or job applicants to disclose the login credentials for their personal social media accounts, or from retaliating against employees or job applicants who refuse to do so. Specifically, the law renders it unlawful for an employer to request, require, or coerce any employee or job applicant to:

It is similarly unlawful for an employer to take any adverse action against an employee or to refuse to hire an applicant because the individual refused to provide the above-noted information.

Importantly, the law broadly defines the term "employer" as "any person or entity engaged in a business, industry, profession, trade or other enterprise in [New York]," as well as "any agent, representative or designee of the employer." Accordingly, the law's impact will likely be widely felt.

Permitted Activity

Even though the new law is concerned with prohibiting employers from requiring disclosure of an employee's or applicant's personal login credentials, employers may continue to view information on an individual's personal social media account that is publicly available. Indeed, the law expressly does not prohibit or restrict an employer from viewing, accessing, or utilizing information about an employee or applicant that can be obtained without any required access information or that is available in the public domain, or, for the purposes of obtaining reports of misconduct or investigating misconduct, from accessing photographs, video, messages, or other information that is voluntarily shared by an employee, client, or other third party to whom the employee subject to such report or investigation has voluntarily given access, contained within such employee's personal account. Similarly, the law does not prohibit an employer from requesting that an employee or applicant disclose a social media username alone; only requests for login credentials, meaning a username together with a password, are prohibited.

The law also provides that employers may still lawfully:

The law also provides that it will be an affirmative defense to any legal action under the law that the employer acted to comply with requirements of federal, state, or local law.

Next Steps for Employers

The law will take effect on March 12, 2024. In anticipation of the new law, employers should closely assess their social media policies and speak with experienced counsel to implement appropriate internal procedures and devise required notices and acknowledgements to ensure compliance with the law's requirements.

© 2023 Perkins Coie LLP

See original here:
New York Bans Employers From Requiring Disclosure of Personal ... - Perkins Coie