Media Search:



Who is Satoshi Nakamoto, the inventor of Bitcoin? – The Coin Republic

Mr. Pratik Chadhokar is an Indian forex, cryptocurrency and financial market advisor and analyst with a background in IT and experience as a financial market strategist. He specialises in market strategies and technical analysis and has spent over a year as a financial markets contributor and observer. He possesses strong technical analytical skills and is well known for his entertaining and informative analysis of the financial markets.

The identity of Satoshi Nakamoto is one of the most tantalizing mysteries in the tech world. This shadowy figure unveiled the first decentralized digital currency in a 2008 white paper and then disappeared into thin air. Pretty intriguing, right?

One can't help but love a good mystery. And Satoshi is one of the most intriguing mysteries the tech industry has seen. This enigmatic character invented a groundbreaking technology that is changing finance forever. But despite having such a huge impact, Satoshi managed to stay completely anonymous. And in a world where everything is traceable and recorded, this makes the Satoshi saga all the more captivating. It seems Satoshi must have gone to great lengths to cover their tracks and hide any biographical details that could reveal their identity. It makes one wonder: what were they trying to hide?

Some speculate Satoshi has an extensive background in computer science and math. This seems plausible: just look at how complex and brilliantly designed Bitcoin is! All signs point to Satoshi being a technical genius, if not something more, like a time traveller from the future. But the latter remains just a theory for now. What's clear is that the complexity of Bitcoin's design shows its creator possesses an exceptional mind. After unleashing Bitcoin on the world, Satoshi stuck around for a while, communicating with other developers working on the protocol. But by 2011, poof! Satoshi disappeared just as mysteriously as they had arrived. This sudden departure left the crypto community scratching their heads, wondering if Satoshi had moved on to other projects or just preferred to remain an enigma.

Now everyone has their own theories on who Satoshi really is. People have studied their writing style and followed clues, and those trying to unlock the mystery have often felt they were getting close, yet no one has ever reached a definitive answer. The clues about Satoshi's identity just go round and round without landing anywhere.

Part of what makes Satoshi such a fascinating character is their anonymity. It was clearly important to them to stay in the shadows, even as Bitcoin took off. It makes one think they were on to something big, like the crypto revolution we now live in. Satoshi's anonymity serves as a reminder of the power of privacy and the impact it can have.

Whoever Satoshi really is, they sparked something huge. Bitcoin has exploded in popularity and changed finance forever. Bitcoin is now accepted at a myriad of companies: Wikipedia, Microsoft and AT&T all take payments and donations through Bitcoin. Not too odd, you might say. But over the past few years in Canada we've seen Burger King, Subway and even KFC accept the cryptocurrency, due in part to the leader of the opposition, Poilievre, being a very vocal pro-crypto force within the country. We've also seen Bitcoin become key for online gaming, with skins and microtransactions becoming a use for it, and online casinos in the country have gotten in on the action too, with many now allowing Bitcoin for deposits. However, if you're not too confident with your cryptocurrency, you can always still stop in at real money online casinos for a more traditional experience. Not insignificant changes considering it all came from an anonymous inventor! Even if Satoshi's identity never surfaces, they have cemented their place in history. That Bitcoin was created and continues to grow as it has is a testament to Satoshi's vision and technical brilliance.

The legend of Satoshi will keep growing as Bitcoin continues to make waves. But hopefully their true identity emerges eventually! A character this intriguing deserves to have their full story told. For now, Satoshi remains one of tech's most compelling pseudonymous figures. What an exit that was!

Read the original here:

Who is Satoshi Nakamoto, the inventor of Bitcoin? - The Coin Republic

Opinion: I asked AI about myself. The answers were all wrong – The Virginian-Pilot

My interest in artificial intelligence was piqued after a colleague told me he was using it for research and writing. Before I used AI for my own work, I decided to test its accuracy with a question I could verify. I asked OpenAI's ChatGPT about my own identity, expecting a text version of a selfie. After a week of repeating the same question, the responses were confounding and concerning.

ChatGPT answered "Who is Philip Shucet?" by listing 15 distinct positions I supposedly held at one time or another. The positions included specific titles, job responsibilities and employment dates. But only three of the 15 jobs were accurate. The other 12 were fabrications; the positions were real, but I was never in any of them. The misinformation included jobs in two states I never lived in, as well as a congressional appointment to the Amtrak Review Board. How could AI be so wrong?

Although newsrooms, boardrooms and classrooms are buzzing with stories, AI is not new. The first chatbot, Eliza, was created in 1966 by Joseph Weizenbaum at MIT. Weizenbaum, who died in 2008, became skeptical of artificial intelligence, telling the New Age Journal in 1985: "The dependence on computers is merely the most recent, and the most extreme, example of how man relies on technology in order to escape the burden of acting as an independent agent."

Was Weizenbaum sending a warning that technology might make us lazy?

In an interview about AI on a March segment of 60 Minutes, Brad Smith, president of Microsoft, told Lesley Stahl that a benefit of AI could be "looking at forms to see if they've been filled out correctly." But what if the form is a resume created by AI? Can AI check its own misinformation? What happens when an employment record is tainted with false information created by AI? Can job recruiters rely on AI queries? Can employers rely on recruiters who use AI? And who is accountable when someone is hired based on misinformation generated by a machine and not by a human?

In the same 60 Minutes segment, Ellie Pavlick, an assistant professor at Brown, told Stahl, "It (AI) doesn't really understand what it is saying is wrong." If AI doesn't know when it is wrong, how can anyone rely on AI to be correct?

In May, two New York attorneys used ChatGPT to write a court brief. The brief cited misinformation from cases that didn't exist. One of the attorneys, Steven Schwartz, told the judge that he "failed miserably" to do his own research to make sure the information was correct. The judge fined each attorney $5,000.

I asked ChatGPT about the consequences of giving out bad information. ChatGPT answered that false information results in misrepresentation, confusion, legal concerns and emotional distress, and erodes trust in AI. If ChatGPT understands the implications of false information, why does it continue to provide fabrications when a search engine could easily provide correct information? Because, as I know now, ChatGPT is not a search engine. I know because I asked.

ChatGPT says it is a language model designed to understand and generate human-like text based on input. ChatGPT says it doesn't crawl the web or search the Internet. Instead, it generates responses based on patterns and information it learned from the text it was trained on.
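That difference is easy to see in miniature. Below is a toy sketch in Python, emphatically not ChatGPT's actual architecture (the training sentences and function names are invented for illustration): a tiny bigram model that generates text purely from patterns in what it was trained on, with no lookup or verification step anywhere.

```python
import random
from collections import defaultdict

# Toy bigram "language model". Like a large language model, it only
# "knows" statistical patterns from its training text; there is no web
# search or database lookup in the loop.
training_text = (
    "the model learned patterns from text . "
    "the model generates text from patterns . "
    "patterns are not facts ."
).split()

# Learn which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev].append(nxt)

def generate(start: str, max_words: int = 10) -> str:
    """Sample a continuation word by word from the learned patterns."""
    word, out = start, [start]
    for _ in range(max_words):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # plausible next word, never verified
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Fluent-looking output can splice together fragments that were never
# true together -- the same failure mode as a fabricated resume entry.
```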

If AI needs to be trained, then there's a critical human element of accountability we can't ignore. So I started training ChatGPT by correcting it each time it answered with false information. After a week of training, ChatGPT was still returning a mix of accurate and inaccurate information, sometimes repeating fabrications. I'm still sending back correct information, but I'm ready to bring this experiment to an end for now.

This wasn't a test of ego; it was a test of reliability and trust. Three accurate answers out of 15 is a 20% accuracy rate, and that is a failing grade.

In 1976, Weizenbaum wrote: "No other organism, and certainly no computer, can be made to confront genuine human problems in human terms." I'm not a Luddite. But as technology continues to leap forward further and faster, let's remember that we are in control of the information that defines us. We are the trainers.

Philip Shucet is a journalist. He previously held positions as the commissioner of VDOT, president and CEO of Hampton Roads Transit, and CEO of Elizabeth River Crossings. He has never held a congressional appointment.

Read more:

Opinion: I asked AI about myself. The answers were all wrong - The Virginian-Pilot

ICYMI: As California Fires Worsen, Can AI Come to the Rescue … – Office of Governor Gavin Newsom

WHAT YOU NEED TO KNOW: No other jurisdiction in the world comes close to California's use of technology and innovation – including AI – to fight fires.

SACRAMENTO – Short answer: yes.

California is leveraging technologies like AI to fight fires faster and smarter, saving countless lives and communities from destruction.

As reported by the Los Angeles Times, CAL FIRE recently launched a pilot program that uses AI to monitor live camera feeds and issues alerts if anomalies are detected. Already, the program has successfully alerted CAL FIRE to 77 fires before any 911 calls were made.

This program is made possible by record investments by Governor Newsom and the Legislature in wildfire prevention and response totaling $2.8 billion.

IN CASE YOU MISSED IT:

As California Fires Worsen, Can AI Come to the Rescue?

By Hayley Smith

Los Angeles Times

Just before 3 a.m. one night this month, Scott Slumpff was awakened by the ding of a text message.

"An ALERTCalifornia anomaly has been confirmed in your area of interest," the message said.

Slumpff, a battalion chief with the California Department of Forestry and Fire Protection, sprang into action. The message meant the agency's new artificial intelligence system had identified signs of a wildfire with a remote mountaintop camera in San Diego County.

Within minutes, crews were dispatched to the burgeoning blaze on Mount Laguna, squelching it before it grew any larger than a 10-foot-by-10-foot spot.

"Without the alert, we wouldn't have even known about the fire until the next morning, when people are out and about seeing smoke," Slumpff said. "We probably would have been looking at hundreds of acres rather than a small spot."

The rapid response was part of a new AI pilot project operated by Cal Fire in partnership with UC San Diego's ALERTCalifornia system, which maintains 1,039 high-definition cameras in strategic locations throughout the state.

The AI constantly monitors the camera feeds in search of anomalies such as smoke, and alerts Cal Fire when it detects something. A red box highlights the anomaly on a screen, allowing officials to quickly verify and respond.
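In outline, what the article describes is a watch-score-alert loop. The sketch below is a hypothetical reconstruction, not ALERTCalifornia's or Cal Fire's actual code; fetch_frame, smoke_model, notify_command_center, the 0.8 threshold and the 30-second polling interval are all illustrative stand-ins.

```python
import time
from dataclasses import dataclass

# Hypothetical reconstruction of the monitoring loop described above.
@dataclass
class Detection:
    camera_id: str
    confidence: float
    box: tuple  # (x, y, w, h) of the region the red box would highlight

ALERT_THRESHOLD = 0.8  # assumed; trades missed fires against false alarms
POLL_SECONDS = 30      # assumed polling interval

def monitor(cameras, fetch_frame, smoke_model, notify_command_center):
    """Continuously score every camera feed and alert a human on likely smoke."""
    while True:
        for cam in cameras:
            frame = fetch_frame(cam)              # latest high-definition image
            confidence, box = smoke_model(frame)  # anomaly score plus its location
            if confidence >= ALERT_THRESHOLD:
                # A person verifies the highlighted box before crews roll.
                notify_command_center(Detection(cam, confidence, box))
        time.sleep(POLL_SECONDS)
```

Keeping a human verification step between the model's red box and a dispatch decision is what lets a system like this tolerate occasional false positives.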

The project rolled out just two months ago to six Cal Fire emergency command centers in the state. But the proof of concept has already been so successful – correctly identifying 77 fires before any 911 calls were logged – that it will soon roll out to all 21 centers.

"The success of this project is the fires you never hear about," said Phillip SeLegue, staff chief of fire intelligence with Cal Fire.

Read more here.

Read more from the original source:

ICYMI: As California Fires Worsen, Can AI Come to the Rescue ... - Office of Governor Gavin Newsom

AI chips, shared trips, and a shorter work week : The Indicator from … – NPR

AI chips, shared trips, and a shorter work week : The Indicator from Planet Money – It's Indicators of the Week, our weekly news roundup. Today, AI doesn't want to invest in AI, a county in Washington state implements a 4-day work week, and NYC says bye bye to Airbnb, sorta.

For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.

Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter.


View post:

AI chips, shared trips, and a shorter work week : The Indicator from ... - NPR

How Schools Can Survive A.I. – The New York Times

Last November, when ChatGPT was released, many schools felt as if they'd been hit by an asteroid.

In the middle of an academic year, with no warning, teachers were forced to confront the new, alien-seeming technology, which allowed students to write college-level essays, solve challenging problem sets and ace standardized tests.

Some schools responded – unwisely, I argued at the time – by banning ChatGPT and tools like it. But those bans didn't work, in part because students could simply use the tools on their phones and home computers. And as the year went on, many of the schools that restricted the use of generative A.I. – as the category that includes ChatGPT, Bing, Bard and other tools is called – quietly rolled back their bans.

Ahead of this school year, I talked with numerous K-12 teachers, school administrators and university faculty members about their thoughts on A.I. now. There is a lot of confusion and panic, but also a fair bit of curiosity and excitement. Mainly, educators want to know: How do we actually use this stuff to help students learn, rather than just try to catch them cheating?

I'm a tech columnist, not a teacher, and I don't have all the answers, especially when it comes to the long-term effects of A.I. on education. But I can offer some basic, short-term advice for schools trying to figure out how to handle generative A.I. this fall.

First, I encourage educators – especially in high schools and colleges – to assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they're being physically supervised inside a school building.

At most schools, this won't be completely true. Some students won't use A.I. because they have moral qualms about it, because it's not helpful for their specific assignments, because they lack access to the tools or because they're afraid of getting caught.

But the assumption that everyone is using A.I. outside class may be closer to the truth than many educators realize. ("You have no idea how much we're using ChatGPT," read the title of a recent essay by a Columbia undergraduate in The Chronicle of Higher Education.) And it's a helpful shortcut for teachers trying to figure out how to adapt their teaching methods. Why would you assign a take-home exam, or an essay on Jane Eyre, if everyone in class – except, perhaps, the most strait-laced rule followers – will use A.I. to finish it? Why wouldn't you switch to proctored exams, blue-book essays and in-class group work, if you knew that ChatGPT was as ubiquitous as Instagram and Snapchat among your students?

Second, schools should stop relying on A.I. detector programs to catch cheaters. There are dozens of these tools on the market now, all claiming to spot writing that was generated with A.I., and none of them work reliably well. They generate lots of false positives, and can be easily fooled by techniques like paraphrasing. Don't believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its A.I. writing detector this year because of a low rate of accuracy.

It's possible that in the future, A.I. companies may be able to label their models' outputs to make them easier to spot – a practice known as watermarking – or that better A.I. detection tools may emerge. But for now, most A.I. text should be considered undetectable, and schools should spend their time (and technology budgets) elsewhere.
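To make "watermarking" a little more concrete, here is a toy sketch of one scheme from the research literature – statistical "green list" watermarking in the style of Kirchenbauer et al. (2023) – not any A.I. vendor's actual method. The secret key, the eight-word vocabulary and the scoring rule are all illustrative assumptions.

```python
import hashlib
import random

SECRET_KEY = "demo-key"  # assumed shared by the generator and the detector
VOCAB = ["river", "stone", "cloud", "ember", "field", "glass", "north", "pine"]

def green_set(prev_word: str) -> set:
    """Pseudorandomly pick half the vocabulary, seeded by the previous word."""
    seed = hashlib.sha256((SECRET_KEY + prev_word).encode()).hexdigest()
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def watermarked_step(prev_word: str) -> str:
    """Generator side: favor 'green' words. (A real model would merely bias
    its token probabilities toward the green set, not pick from it outright.)"""
    return random.choice(sorted(green_set(prev_word)))

def green_fraction(words: list) -> float:
    """Detector side: how often each word falls in the previous word's green set.
    Unwatermarked text scores near 0.5; watermarked text scores much higher."""
    hits = sum(w in green_set(prev) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Demo: generate 20 watermarked words, then check them.
text = ["river"]
for _ in range(20):
    text.append(watermarked_step(text[-1]))
print(green_fraction(text))  # 1.0 here, since every word came from a green set
```

The catch, as the column notes, is that detection like this works only if the model's maker cooperates by embedding the signal in the first place, and heavy paraphrasing can wash it out.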

My third piece of advice – and the one that may get me the most angry emails from teachers – is that teachers should focus less on warning students about the shortcomings of generative A.I. than on figuring out what the technology does well.

Last year, many schools tried to scare students away from using A.I. by telling them that tools like ChatGPT are unreliable, prone to spitting out nonsensical answers and generic-sounding prose. These criticisms, while true of early A.I. chatbots, are less true of today's upgraded models, and clever students are figuring out how to get better results by giving the models more sophisticated prompts.

As a result, students at many schools are racing ahead of their instructors when it comes to understanding what generative A.I. can do, if used correctly. And the warnings about flawed A.I. systems issued last year may ring hollow this year, now that GPT-4 is capable of getting passing grades at Harvard.

Alex Kotran, the chief executive of the AI Education Project, a nonprofit that helps schools adopt A.I., told me that teachers needed to spend time using generative A.I. themselves to appreciate how useful it could be and how quickly it was improving.

"For most people, ChatGPT is still a party trick," he said. "If you don't really appreciate how profound of a tool this is, you're not going to take all the other steps that are going to be required."

There are resources for educators who want to bone up on A.I. in a hurry. Mr. Kotrans organization has a number of A.I.-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have also begun assembling recommendations for their peers, such as a website made by faculty at Gettysburg College that provides practical advice on generative A.I. for professors.

In my experience, though, there is no substitute for hands-on experience. So I'd advise teachers to start experimenting with ChatGPT and other generative A.I. tools themselves, with the goal of getting as fluent in the technology as many of their students already are.

My last piece of advice for schools that are flummoxed by generative A.I. is this: Treat this year – the first full academic year of the post-ChatGPT era – as a learning experience, and don't expect to get everything right.

There are many ways A.I. could reshape the classroom. Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, thinks the technology will lead more teachers to adopt a "flipped classroom" – having students learn material outside class and practice it in class – which has the advantage of being more resistant to A.I. cheating. Other educators I spoke with said they were experimenting with turning generative A.I. into a classroom collaborator, or a way for students to practice their skills at home with the help of a personalized A.I. tutor.

Some of these experiments won't work. Some will. That's OK. We're all still adjusting to this strange new technology in our midst, and the occasional stumble is to be expected.

But students need guidance when it comes to generative A.I., and schools that treat it as a passing fad or an enemy to be vanquished will miss an opportunity to help them.

"A lot of stuff's going to break," Mr. Mollick said. "And so we have to decide what we're doing, rather than fighting a retreat against the A.I."

Read the original here:

How Schools Can Survive A.I. - The New York Times