Media Search:



Creating Censorship-Resistant Ethscriptions on the Ethereum … – The Coin Republic

Ethscriptions provide a novel way for anyone to permanently engrave content like text, images, and documents into the Ethereum blockchain. By publishing data as ethscriptions, you can securely log information in Ethereum's decentralized ledger forever. While the process involves some technical steps, this guide will walk through how to create censorship-resistant ethscriptions from start to finish.

You'll need an Ethereum account and a wallet configured to interact with the network to get started. Leading browser-based options like MetaMask, Coinbase Wallet, or MyEtherWallet allow easy access without running a full node. Be sure to fund your wallet with a small amount of ETH to cover the miner fees required to publish transactions.

You'll also want to prepare the text, images, or other content you wish to upload. Ethscription platforms can encode small files like text documents, JPGs, and PDFs. Have this data saved locally on your computer for the next step.
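Under the hood, platforms typically package content as a data URI before committing it on-chain. As a purely illustrative sketch (the filename is a placeholder and your platform's exact encoding may differ), a small image could be turned into such a URI in Python like this:

```python
# Hypothetical example: turn a local JPG into the kind of base64 data URI
# an ethscription platform could embed on-chain. "my_image.jpg" is a placeholder.
import base64

with open("my_image.jpg", "rb") as f:
    payload = base64.b64encode(f.read()).decode("ascii")

data_uri = "data:image/jpeg;base64," + payload
print(data_uri[:80] + "...")  # preview of the URI to be engraved
```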

Now you must choose a suitable ethscription platform to publish the data to Ethereum. Services like Etherscribe, Ethegra, and Stampd allow creating ethscriptions in just a few clicks. Compare features like file formats accepted, cost structure, and ease of use.

Connect your wallet to the selected platform so it can request transaction signatures on your behalf. Double-check that the correct wallet is linked to avoid errors. Most platforms will detect wallets like MetaMask automatically.

Once your wallet is connected to the platform, you can upload your content. Paste in any text you wish to immortalize or drag and drop your selected image or document file. Give the ethscription a short title and description for reference, then review all the details to ensure the data is correct. The platform will then encode your content and generate the transaction needed to add it to Ethereum.

Submit the transaction to finalize publishing the ethscription. Your connected wallet will need to sign the transaction before it broadcasts to the network; within a minute or two, the ethscription will be included in a block and immutably saved to Ethereum.
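For readers curious about what the platform is doing behind the scenes, here is a minimal, hedged sketch using web3.py. It assumes the core of the Ethscriptions approach: the content is expressed as a data URI and hex-encoded into the calldata of an ordinary transaction. The RPC endpoint, private key, and recipient address below are placeholders you would replace; the platform normally handles all of this for you.

```python
# Minimal sketch (not a production implementation): publish a text ethscription
# by embedding a data URI in transaction calldata. All credentials are placeholders.
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"      # placeholder Ethereum RPC endpoint
PRIVATE_KEY = "0x<your private key>"         # placeholder; never hard-code real keys
RECIPIENT = "0x0000000000000000000000000000000000000000"  # intended owner of the ethscription

w3 = Web3(Web3.HTTPProvider(RPC_URL))
account = w3.eth.account.from_key(PRIVATE_KEY)

data_uri = "data:text/plain,Hello, Ethereum"  # the content to engrave
tx = {
    "from": account.address,
    "to": RECIPIENT,
    "value": 0,
    "nonce": w3.eth.get_transaction_count(account.address),
    "gasPrice": w3.eth.gas_price,
    "data": "0x" + data_uri.encode("utf-8").hex(),  # calldata = hex-encoded data URI
    "chainId": 1,
}
tx["gas"] = w3.eth.estimate_gas(tx)

signed = account.sign_transaction(tx)
# Older web3.py releases name this attribute signed.rawTransaction.
tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
print("Ethscription transaction hash:", tx_hash.hex())
```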

Once the ethscription is sealed within the blockchain, the platform will provide a transaction hash you can use to view it via blockchain explorers like Etherscan. This link lets anyone access and verify your engraved content.
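As a hedged illustration of that verification step, anyone with the transaction hash can pull the raw transaction from a node and decode the calldata back into the original data URI (the endpoint and hash below are placeholders):

```python
# Sketch: recover an ethscription's content from its transaction hash.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))   # placeholder RPC endpoint
tx = w3.eth.get_transaction("0x<your transaction hash>")       # placeholder hash

# Recent web3.py versions return calldata as bytes; decode it back to the data URI.
data_uri = bytes(tx["input"]).decode("utf-8")
print(data_uri)  # e.g. "data:text/plain,Hello, Ethereum"
```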

The beauty of ethscriptions is that once published to Ethereum, your engraved content is permanently sealed into the blockchain's immutable ledger. This provides indefinite proof that your document or media existed at the time it was ethscribed. The decentralized nature of Ethereum ensures no single entity can ever censor, restrict access to, or delete your ethscription. It will remain uncensorable and public forever.

This makes ethscriptions a powerful tool for immortalizing personally or professionally important information in a way that cannot be altered or suppressed. The far-reaching potential to indelibly preserve memories, records, and data for posterity is what makes ethscriptions on Ethereum so revolutionary.



A mess of censorship – Chronicle Times

By Erin Rydgren | on August 12, 2023

Book banning is bad business, as school and city officials are finding out in Alta. Under a new state law, the school will need to ban potentially hundreds of books if they refer to sex. That means that the Alta Municipal Library, which shares its stacks with the Alta-Aurelia School District, would have to do a massive purge of its 21,000 books (60% of which are the city's). This has prompted the city to think about establishing its own separate library just so a 7th grader doesn't have access to a book like Catcher in the Rye. That is not hyperbole. The book is on a list of 347 proposed for censorship in the Urbandale School District. A different list of banned books, with similar classics tagged, is circulating in the Norwalk School District. It's hard to imagine what Alta-Aurelia might come up with.

We could have more than 300 sets of rules depending on the school district and how prudish an influential set of patrons is with the school board. We recall several years ago leading Republican legislators declaring that you could not have local control over livestock confinement because you would have 99 sets of rules, and that would be a mess for the pork industry. Yet the same party thinks it is okay to make school boards into a censorship authority. The Iowa Department of Education, under the guidance of Gov. Reynolds, who cooked up this law, refuses to issue regulations for school districts to follow. Everyone is on their own. That's not leadership; it's chaos.

And it is wasteful. The city and school district had a nice thing going, sharing facilities and staff. It saved money. It created a program the city probably could not afford on its own. We're pretty sure no innocent eyes were exposed to anything of prurient interest that they could not otherwise find on their cellphone or on TV during primetime. Alta and Aurelia always have been able to establish community standards and did not need the assistance of the governor and legislature.

Sen. Lynn Evans, a Republican who supported the book-banning bill, is an Aurelia native and former superintendent of schools. He is confident that the city and school district will work something out. He thinks there is a way to cordon off adult books from sixth graders and the like. The city is not necessarily as optimistic. We certainly appreciate the city's anxiety over trashing much of its collection.

It's a shame that the legislature didn't think this through. It's too bad we let partisans or holy rollers write our curriculum standards instead of trained educators. You would think the University of Northern Iowa or Buena Vista were grooming socialists and perverts to run our schools. The Department of Education is derelict to just ignore it. Republicans created a mess for their core constituency: rural Iowa. This is what Alta and Aurelia get: a big headache from stupidity and zeal.


Critical thinking education trumps banning and censorship in battle … – PsyPost

A new study conducted by researchers from Michigan State University suggests that the battle against online disinformation cannot be won by content moderation or banning those who spread fake news. Instead, the key lies in early and continuous education that teaches individuals to critically evaluate information and remain open to changing their minds.

The study was recently featured in SIAM News, a publication of the Society for Industrial and Applied Mathematics (SIAM).

"Disinformation is one of the most important problems of modern times and is poised to worsen as the power of AI increases. Our research group develops models for the spread of contagions, so disinformation, like disease, is a natural topic," explained study author Michael Murillo, a professor in the Department of Computational Mathematics, Science and Engineering.

The researchers used a technique called agent-based modeling to simulate how people's opinions change over time. They focused on a model in which individuals can believe the truth, believe the fake information, or remain undecided. The researchers created a network of connections between these individuals, similar to how people are connected on social media.

They used the binary agreement model to understand the tipping point (the point where a small change can lead to significant effects) and how disinformation can spread.

They tested three main disinformation mitigation strategies under consideration by the U.S. Congress: content moderation (such as banning those who spread fake news), public education (teaching people to fact-check and be skeptical), and counter campaigns (promoting groups committed to spreading the truth).

The researchers implemented each strategy in the simulated environment to test its effectiveness. They created thousands of small networks representing different types of social connections and applied mathematical rules to simulate real-world scenarios.
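The paper's exact code is not reproduced here, but a minimal sketch of this kind of simulation can convey the idea. The snippet below assumes a standard speaker-listener binary agreement model on a random network with a committed minority of disinformation "zealots"; the network type, size, and step counts are illustrative choices, not the study's parameters. Content moderation could be mimicked by removing committed-disinformation agents, and education by committing more agents to the truth.

```python
# Illustrative sketch of a binary agreement (naming game) model with a committed
# minority; parameters are assumptions, not the values used by Butts & Murillo.
import random
import networkx as nx

def run_binary_agreement(n=1000, committed_fraction=0.10, steps=200_000, seed=0):
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n, 0.01, seed=seed)     # assumed network topology
    nodes = list(g.nodes)

    # Opinions: {"A"} = disinformation, {"B"} = truth, {"A", "B"} = undecided.
    opinion = {node: {"B"} for node in nodes}
    committed = set(rng.sample(nodes, int(committed_fraction * n)))
    for node in committed:                           # zealots who never change their minds
        opinion[node] = {"A"}

    for _ in range(steps):
        speaker = rng.choice(nodes)
        neighbors = list(g.neighbors(speaker))
        if not neighbors:
            continue
        listener = rng.choice(neighbors)
        word = rng.choice(sorted(opinion[speaker]))  # speaker voices one of its opinions
        if word in opinion[listener]:
            # Agreement: both collapse to the shared opinion (unless committed).
            if speaker not in committed:
                opinion[speaker] = {word}
            if listener not in committed:
                opinion[listener] = {word}
        elif listener not in committed:
            opinion[listener].add(word)              # listener becomes undecided

    return sum(1 for s in opinion.values() if s == {"A"}) / n

if __name__ == "__main__":
    # Sweeping the committed fraction hints at the tipping point: below it the
    # truth largely persists, above it disinformation can take over the network.
    for f in (0.05, 0.10, 0.15):
        share = run_binary_agreement(committed_fraction=f)
        print(f"committed fraction {f:.2f} -> share ending on disinformation: {share:.2f}")
```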

"Disinformation is an important problem that policy makers are attempting to address," Murillo told PsyPost. "We have developed models to simulate the spread of disinformation to test various mitigation strategies. From the mathematics and many thousands of simulations, we are able to assess the most fruitful strategies."

The researchers found that if just 10% of the population strongly believes in disinformation, the rest may follow suit. The findings suggest that disinformation spreads easily because people naturally want to believe things that align with their existing beliefs.

Teaching people to recognize their biases, be more open to new opinions, and be skeptical of online information proved the most effective strategy for curbing disinformation.

Early education (teaching people to be skeptical and question information early on, before they form strong opinions) had the most significant effect on stopping disinformation. Late education (trying to correct people's beliefs after they have already formed opinions) was not as effective as early education but still had some impact.

Strategies like removing people who share fake content or creating counter campaigns were not as effective as education. The researchers explained that even though these strategies might seem like quick solutions, they don't work as well in the long run.

"We were surprised, and disheartened, by how difficult this problem is," Murillo said. "If one guesses the cost and time to implement strategies, such as broad education on critical thinking and education, we are looking at a generational-scale problem."

As with all research, the new study includes some caveats.

"We deliberately created a parsimonious model to uncover the essential factors at play; however, much more detail could be added to better match specific situations," Murillo explained. "Also, many proposed strategies are only band-aids that treat the symptom, such as labeling videos in YouTube, but do not address the underlying cause that may be related to a social or political issue."

"More research is needed to understand how and why people are drawn toward disinformation in general," Murillo added. "People tend to be drawn toward sensationalist ideas, which empower and gives advantage to the sources of disinformation. Given improved knowledge of this aspect of human nature, we can enhance our models and policy makers could perhaps develop more optimal seat belts to control the spread of disinformation."

The study, "Evaluating the Effectiveness of Mitigation Policies Against Disinformation," was authored by David J. Butts and Michael S. Murillo.


The Censors Down Under: The ACMA Gambit on Misinformation … – International Policy Digest

In January 2010, the then-U.S. Secretary of State Hillary Clinton, doing what she does best, grasped a platitude and ran with it in a speech delivered, of all things, at an institution called the Newseum. Information freedom, she declared, "supports the peace and security that provide a foundation for global progress."

The same figure has encouraged the prosecution of such information spear carriers as Julian Assange, who dared give the game away by publishing, among other things, documents from the State Department and emails from Clinton's own presidential campaign in 2016 that cast her in a rather dim light. Information freedom is only to be lauded when it favours your side.

Who regulates, let alone should regulate, information disseminated across the Internet remains a critical question. Gone is the frontier utopianism of an open, untampered information environment, where bright and optimistic netizens could gather, digitally speaking, in the hall, the agora, the square, to debate, to ponder, to dispute every topic there was. Perhaps it never existed, but for a time, it was pleasant to even imagine it did.

The shift towards information control was bound to happen and was always going to be encouraged by the greatest censors of all: governments. Governments untrusting of the posting policies and tendencies of social media users and their facilitators have been, for some years, trying to rein in published content in a number of countries. Cyber-pessimism has replaced cyber-utopianism. "Social media," remarked science writer Annalee Newitz in 2019, "has poisoned the way we communicate with each other and undermined the democratic process." The emergence of the terribly named "fake news" phenomenon adds to such efforts, all the more ironic given the fact that government sources are often its progenitors.

To make things even murkier, the social media behemoths have also taken liberties with what content they will permit on their forums, using their selective algorithms to disseminate information at speed even as they prevent other forms of it from reaching wider audiences. Platforms such as Facebook and Twitter, heeding the call of the very screams and bellows of their own creation, thought it appropriate to exclude or limit various users in favour of selected causes and more sanitised usage. In some jurisdictions, they have become the surrogates of government policy under threat: remove any offending material, or else.

Currently under review in Australia is another distinctly nasty example of such a tendency. The Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2023 is a proposed instrument that risks enshrining censorship by stealth. Its exposure draft is receiving scrutiny through public submissions until August. Submissions are sought on the proposed laws "to hold digital platform services to account and create transparency around their efforts in responding to misinformation and disinformation in Australia."

The Bill is a clumsily drafted, laboriously constructed document. It is outrageously open-ended on definitions and a condescending swipe at the intelligence of the broader citizenry. It defines misinformation as online content that is false, misleading or deceptive, that is shared or created without an intent to deceive but can cause and contribute to serious harm. Disinformation is regarded as misinformation that is intentionally disseminated with the intent to deceive or cause serious harm.

The bill, should it become law, will empower the Australian Communications and Media Authority (ACMA) to monitor and regulate material it designates as harmful online misinformation and disinformation. The Big Tech fraternity will be required to impose codes of conduct to enforce the interpretations made by the ACMA, with the regulator even going so far as proposing to create and enforce an industry standard. Those in breach will be liable for up to $5 million or 5% of global turnover for corporations.

What, then, is the harm? Examples are provided in the Guidance Note to the Bill. These include hatred targeting a group based on ethnicity, nationality, race, gender, sexual orientation, age, religion, or physical or mental disability. It can also include disruption to public order or society, the old grievance the State has when protestors dare differ in their opinions and do the foolish thing by expressing them. (The example provided here betrays the mind of the typical paranoid government official: misinformation that encouraged or caused people to vandalise critical communications infrastructure.)

John Steenhof of the Human Rights Law Alliance has identified, correctly, the essential, dangerous consequence of the proposed instrument. It will grant the ACMA a mechanism to determine what counts as acceptable communication and what counts as misinformation and disinformation. This potentially gives the state the ability to control the availability of information for everyday Australians, granting it power beyond anything that a government should have in a free and democratic society.

Interventions in such information ecosystems are risky matters, certainly for states purporting to be liberal democratic and supposedly happy with debate. A focus on firm, robust debate, one that drives out poor, absurd ideas in favour of richer and more profound ones, should be the order of the day. But we are being told that the quality of debate, and the strength of ideas, can no longer be sustained as an independent ecosystem. Your information source is to be curated for your own benefit, because the government class says it's so. What you receive, and how you receive it, is to be controlled paternalistically.

The ACMA is wading into treacherous waters. The conservatives in opposition are worried, with Shadow Communications Minister David Coleman describing the draft as a very bad bill giving the ACMA extraordinary powers. It would lead to digital companies self-censoring the legitimately held views of Australians to avoid the risk of massive fines. Not that the conservative coalition has any credibility in this field. Under the previous governments, a relentless campaign was waged against the publication of national security information. An enlightened populace is the last thing these characters, and their colleagues, want.



Elon Musk says Tesla cars now have a mind, figured out ‘some aspects of AGI’ – Electrek

Elon Musk claims that Tesla may have figured out some aspects of AGI as he believes that Tesla vehicles now have a mind.

The CEO has said several times that he believes most of Tesla's value is attached to self-driving, and he says Tesla could achieve it by the end of the year.

The Tesla community is divided between believers who think the automaker is indeed about to deliver on its long-stated promise, and people who have been burned too many times by missed timelines and think a robotaxi service from Tesla is still years away.

That's why we are tracking the effort really closely to see if there's any chance Tesla can make Musk's prediction come true with just a few months left in the year.

On X (formerly Twitter), Musk often shines a spotlight on some of those true believers who only show the good performance of Tesla's FSD Beta. This week, he commented on one of those posts by claiming that he believes Tesla has figured out some aspects of AGI:

I think we may have figured out some aspects of AGI. The car has a mind. Not an enormous mind, but a mind nonetheless.

AGI stands for artificial general intelligence. Musk has said that he believes Tesla might play a role in achieving AGI through its self-driving program.

Unlike some other self-driving programs, Tesla relies heavily on camera-based vision and neural nets to power its system. The company believes that this approach is closer to how humans drive and could be transferred to other autonomous products, like its Optimus robot.

The guy may not be wrong. In my last review of FSD Beta, I noted that it drives like a first-time 14-year-old driver who sometimes does hard drugs.

I also noted that while this might sound like an insult to Tesla's system, I wouldn't know the first thing about making a car drive autonomously at the level of a 14-year-old driver who sometimes does hard drugs. Therefore, I believe it's an achievement in itself.

Now does it mean that Tesla cars have a mind equivalent to a 14-year-old who sometimes does hard drugs while driving? Probably not, but I can see his point.

If you have been following my reporting on FSD, you know that I'm not the most optimistic about the program. However, I have some hope that updating the vehicle control with new neural nets and the new computing power that comes with the Dojo supercomputer could greatly accelerate the pace of improvements.

AGI, though? I'm skeptical but open-minded.

