Media Search:



Commentary: After the insurrection, America’s far-right groups get more extreme – pressherald.com

As the U.S. grapples with domestic extremism in the wake of the Jan. 6 insurrection at the U.S. Capitol, warnings about more violence are coming from FBI Director Chris Wray and others. The Conversation asked Matthew Valasik, a sociologist at Louisiana State University, and Shannon E. Reid, a criminologist at the University of North Carolina Charlotte, to explain what right-wing extremist groups in the U.S. are doing. The scholars are co-authors of Alt-Right Gangs: A Hazy Shade of White, published in September 2020; they track the activities of far-right groups like the Proud Boys.

What are U.S. extremist groups doing since the Jan. 6 riot?

Local chapters of the Proud Boys, Oath Keepers, Groypers and others are breaking away from their groups' national figureheads. For instance, some local Proud Boys chapters have been explicitly cutting ties with national leader Enrique Tarrio, the group's chairman. Tarrio was arrested on federal weapons charges in the days before the insurrection, but he has also been revealed as a longtime FBI informant. He reportedly aided authorities in a variety of criminal cases, including those involving drug sales, gambling and human smuggling, though he has not yet been connected with cases against Proud Boys members.

When a leader of a far-right group or street gang leaves, regardless of the reason, it is common for a struggle to emerge among remaining members who seek to consolidate power. That can result in violence spilling over into the community as groups attempt to reshape themselves.

While some of the splinter Proud Boys chapters will likely maintain the Proud Boys brand, at least for the time being, others may evolve and become more radicalized. The Base, a neo-Nazi terror group, has recruited from among the ranks of the Proud Boys. As the Proud Boys sheds affiliates, it would not be surprising for those with more enthusiasm for hateful activism to seek out more extreme groups. Less committed groups will wither away.

How does that response compare with what happened after 2017's Unite the Right rally in Charlottesville?

Neither the Capitol insurrection nor the Charlottesville rally produced the response from mainstream America that far-right groups had hoped for. Rather than rising up in a groundswell of support, most Americans were appalled, some so much that they have abandoned the Republican Party.

Additionally, right-wingers have been hit hard by post-insurrection actions by large technology companies like Facebook, Twitter, Apple, Google and Amazon. Those companies took down far-right group members' accounts and removed right-wing social media platforms, including permanently blacklisting Donald Trump's Twitter account and temporarily blocking all traffic to Parler, a conservative social media platform. Those steps are more significant than the moderation and algorithm changes the companies had undertaken in previous efforts to curb online extremism.

Another major difference is the lack of regret. Nobody on the right wanted to be associated with Charlottesville after it happened. Figureheads of the far right who had initially promoted that rally saw the negative public reaction and distanced themselves, even condemning the Unite the Right rally. After the insurrection at the Capitol, their response was different. They did not split and blame other right-wing groups. Instead, conservative and extreme-right circles have united behind a false claim that they did nothing wrong, alleging, despite all the evidence to the contrary, that left-wing activists assaulted the Capitol while disguised as right-wingers.

Are extremist groups attracting new members?

Some members have left extremist groups in the wake of the Jan. 6 violence. The members who remain, and the new members they are attracting, are increasing the radicalization of far-right groups. As the less committed members abandon these far-right groups, only the more devout remain. Such a shift will alter the subculture of these groups, driving them farther to the right. We expect this polarization will only accelerate the reactionary behaviors and extremist tendencies of these far-right groups.

Right-wing pundits and conservative media are continuing to stoke fears about the Biden administration. We and other observers of right-wing groups expect that extremists will come to see the events of Jan. 6 as just the opening skirmish in a modern civil war. We anticipate they will continue to seek an end to American democracy and the beginning of a new society free of, or even purged of, groups the right wing fears, including immigrants, Jewish people, nonwhites, LGBTQ people and those who value multiculturalism. We expect that these groups will continue to shift more and more to the extreme right, posing risks of acts of violence both large and small.

Have far-right extremists' views toward the police changed?

With a Democratic administration and attorney general, the far right will no longer view federal law enforcement agencies as friendly, the way they did under the Trump administration. Rather, they view the police as the enemy. Even before Joe Biden took office and the Republicans officially lost control of the U.S. Senate, the Capitol riot showed this divide between right-wing extremists and police. A Capitol Police officer was assaulted with a flagpole bearing an American flag, and some members of the mob were police officers and military personnel. Many more were military veterans.

It's not clear what this different view of law enforcement means for police officers, active-duty military and veterans who are members of right-wing groups. But we anticipate that only those who are most zealously committed to far-right causes will remain active. That, in turn, will push those groups even farther to the extreme right.

Has anything changed for militias since Biden became president?

In 2009, the Department of Homeland Security issued a report warning about growing membership in far-right groups, including their active recruitment of military veterans. Shortly after the report was released, Republicans in Congress pushed for it to be retracted and for a dramatic reduction in the federal effort to monitor far-right groups in the U.S. This permissive atmosphere allowed far-right groups to grow and spread nationwide.

The Trump administration further served far-right groups by failing to pay out federal grants for grassroots counterviolence programs, by refusing to help local law enforcement agencies with equipment or training to deal with these groups, and by routinely downplaying the violence perpetrated by these white power groups. Essentially, far-right groups went unpoliced for the past decade or more.

But that approach has ended. Merrick Garland's appointment as Biden's attorney general is a big signal: in his career at the Department of Justice before becoming a federal judge, Garland supervised the investigations of the 1995 Oklahoma City bombing and the 1996 Atlanta Olympics bombing, two of the most noteworthy acts of far-right domestic terrorism in the nation's history. Garland has said that he will make fighting right-wing violence and attacks on democracy major priorities of his tenure at the head of the Justice Department.

In January, Canada designated the Proud Boys and other right-wing groups as terrorist organizations, which puts pressure on U.S. law enforcement to reconsider how it evaluates, investigates and prosecutes these extremist groups. Beyond law enforcement's treating these far-right groups like street gangs, there are also laws in place to combat violence associated with domestic terrorism. It appears that U.S. prosecutors may finally begin to take seriously the violent actions of the Proud Boys, especially as more and more members are charged with coordinating the breach of the U.S. Capitol Building.

But as police power comes to bear on these violent right-wing groups, many of their members remain at least as radicalized as they were on Jan. 6, if not more so. Some may feel that more extreme measures are needed to resist the Biden administration.

The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

Continue reading here:
Commentary: After the insurrection, America's far-right groups get more extreme - pressherald.com

Pepe The Frog | Derp Cat Wiki | Fandom

Pepe The Frog is a meme frog. He is the mortal form of Lord Kek and the leader of Kekistan.

Allies: The Kekistani people

Enemies: Enemies of Kekistan

Upon the rise of his followers to form a nation, Lord Kek decided he should directly influence his followers, helping them grow and flourish. As such, he took on a mortal form: a frog known as Pepe, the undying King of Kekistan. He led his people through numerous battles and wars, helping them through hardship with his divine powers. However, Pepe was an unstable entity. Just as the Kekistani bordered the line between dank and cringe, Pepe bordered the line between Meme God and Devil Entity. Pepe/Kek remains sane for the moment, though those who know his secret are wary he could snap. Regardless, Pepe is an influential meme leader and not to be underestimated. He is Kekistan's representative at the United Memes, and seems to truly care for his people. Crazed God or not, there is some aspect of Pepe that can be... respected.

During the Great Mongo-Kekistan War, Pepe led a massive force to attempt to retake parts of the Mongoose Empire, but was beaten back by the combined forces of Megarton and High God King Overlord Sashank. Since then, he has not ventured out of his lair, except to attend meetings of the United Memes or meet with his generals.

Link:
Pepe The Frog | Derp Cat Wiki | Fandom

NFT goldrush: A roundup of the strangest nonfungible tokens – CNET

A .gif of Nyan Cat sold for lots and lots (and lots) of money as an NFT.

NFTs have temporarily taken the reins from cryptocurrency as the strangest online trend. Nonfungible tokens have become a sensation, or scandal, thanks to the headline-grabbing insanity of it all: Memes being sold for the cost of a Tesla, tweets fetching seven-figure bids and digital art selling for $69 million.

A quick catchup: Nonfungible assets are those that aren't interchangeable with one another. Every $100 bill holds the same value as any other $100 bill, therefore they are fungible. Houses, cars and collectables are nonfungible: Houses of the same size on the same street will sell for different prices, and the same model of the same car can similarly vary in cost.

Subscribe to CNET Now for the day's most interesting reviews, news stories and videos.

Which takes us to nonfungible tokens. They're essentially certifications of ownership recorded on a blockchain. Nonfungible tokens put the ownership of a digital product -- be it digital art, a video clip or even just a jpeg or gif -- on that ledger. In the age of NFTs, downloading a picture is like owning a print. Having the NFT is like owning the original painting.
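
For the technically curious, the ownership idea reduces to something very small. Below is a minimal Python sketch of a token ledger, offered purely as an illustration of nonfungibility; the class, method, and token names are invented for the example and are not any real blockchain's API.

```python
# Minimal illustration of nonfungibility: a ledger maps each unique
# token ID to an owner, and transferring one token says nothing about
# any other token. Names here are invented for the example, not a
# real chain's API.

class TokenLedger:
    def __init__(self):
        self._owners = {}  # token_id -> owner

    def mint(self, token_id, owner):
        if token_id in self._owners:
            raise ValueError("token already exists")  # every ID is unique
        self._owners[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

    def owner_of(self, token_id):
        return self._owners[token_id]


ledger = TokenLedger()
ledger.mint("nyan-cat-gif", "artist")
ledger.transfer("nyan-cat-gif", "artist", "collector")
print(ledger.owner_of("nyan-cat-gif"))  # collector
```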

Real digital artists are making real money on NFTs. Take Beeple. He's a digital artist with a huge fanbase, over 1.8 million followers on Instagram. Art he sold as an NFT recently fetched $69 million in a Christie's auction. That's insane to you or me, but not to people who frequent Christie's auctions, who spend $60 million on abstract expressionist paintings.

But even if there is a small percentage of NFT sales you can make sense of, there are many more which are absolutely, positively nuts.

For example...

When COVID-19 lockdown began last March, Brooklyn filmmaker Alex Ramírez-Mallis and four friends did the obvious thing: started sending audio recordings of their farts to one another through a WhatsApp group chat. One year later, Ramírez-Mallis is auctioning 52 minutes of audio flatulence as an NFT.

The auction's starting price: $85. Would you pay $85 for farts? Would be a solid investment if you did, since someone out there was ultimately willing to pay 0.24 ethereum, or about $420, for the NFT. What's more, in addition to selling the 52-minute recording, he's also selling NFTs for individual farts. Only one has been sold: Fart #420, for about $90.

"If people are selling digital art and GIFs, why not sell farts?" Ramrez-Mallistold the New York Post. Truer words, never spoken.

Bad Luck Brian.

Remember Bad Luck Brian? It was a meme popularized in 2012, when a yearbook photo of high school student Kyle Craven, depicting him with braces and a plaid sweater, was posted to Reddit. People would post the picture with macro captions of unfortunate events, like "Escapes burning building. Gets hit by firetruck." (Most of the good ones are too NSFW for me to post here.)

Kyle Craven has had the last laugh, though, selling the yearbook photo as an NFT for $36,000. It's kind of a beautiful underdog story for the digital age. Kind of.

This art was sold as an NFT for $38,000 in 2018 and flipped three years later for $320,000.

This one is dumb, but it's also an illustrative example of why people are buying NFTs: to sell them for more later on.

The above piece of art is like a Pokemon card for a hell-creature merge of Homer Simpson and Pepe the Frog. Homer Simpson is, well, Homer Simpson, and Pepe is an internet frog that's popular on 4chan and other corners of the internet. The NFT for this art recently sold for $320,000.

The crazy part? The person who sold it wasn't its creator. He bought it back in 2018 for $38,000. So as preposterous as all of this NFT business is, it's worth noting that some people are actually making a lot of money flipping them.

Now we get into the stupid money.

Nyan Cat was a YouTube sensation nearly 10 years ago. It was a video of a pixelated cat with a Pop-Tart for a torso, along with the tune of a Japanese pop song. It has over 185 million views on YouTube, and has become a ubiquitous gif in the years since.

"The design of Nyan Cat was inspired by my cat Marty, who crossed the Rainbow Bridge but lives on in spirit," wrote its creator on the sales page for the NFT of Nyan Cat. It would end up selling for 300 ethereum -- $531,000.

"Just setting up my twtter," tweeted Jack Dorsey, co-founder and CEO of Twitter, back in 2006. Turns out that each of those words is worth over $625,000, as the NFT for that tweet is currently at auction, with the top bid sitting at $2.5 million.

Dorsey has said the proceeds will be converted to Bitcoin and donated to GiveDirectly, a charity that helps six African countries with COVID-19 relief.

The philanthropy is nice -- not to be understated, since it'll likely save thousands of lives -- but there's also some clever marketing at play here. NFTs are closely related to cryptocurrency, since both are based on blockchain, to the point where NFTs are almost always bought with Ethereum, the second-biggest cryptocurrency after Bitcoin. So if you're a big investor in cryptocurrency, like Dorsey is, inflating the NFT bubble isn't a bad way to help your crypto holdings appreciate.

Which is why it's not surprising to see Tesla CEO Elon Musk tweet about NFTs, and tease selling one in the future.

But despite the philanthropy, the guerrilla marketing and the distinct possibility that the buyer will be able to flip the tweet for $10 million in a few years, dropping $2.5 million on a tweet is a sign we've entered a new era of internet insanity.

Read the original post:
NFT goldrush: A roundup of the strangest nonfungible tokens - CNET

Google’s AlphaGo computer beats human champ Lee Sedol in …

SEOUL, South Korea -- Game not over? Human Go champion Lee Sedol says Google's Go-playing program AlphaGo is not yet superior to humans, despite its 4:1 victory in a match that ended Tuesday.

The week-long showdown between the South Korean Go grandmaster and Google DeepMind's artificial intelligence program showed the computer software has mastered a major challenge for artificial intelligence.

"I don't necessarily think AlphaGo is superior to me. I believe that there is still more a human being could do to play against artificial intelligence," Lee said after the nearly five-hour-long final game.

AlphaGo had the upper hand in terms of its lack of vulnerability to emotion and fatigue, two crucial aspects in the intense brain game.

"When it comes to psychological factors and strong concentration power, humans cannot be a match," Lee said.

But he added, "I don't think my defeat this time is a loss for humanity. It clearly shows my weaknesses, but not the weakness of all humanity."

He expressed deep regret for the loss and thanked his fans for their support, saying he enjoyed all five matches.

Lee, 33, has made his living playing Go since he was 12 and is famous in South Korea even among people who do not play the game. The entire country was rooting for him to win.

The series was one of the most intensely watched events in the past week across Asia. The human-versus-machine battle hogged headlines, eclipsing reports of North Korean threats of a pre-emptive strike on the South.

The final game was too close to call until the very end. Experts said it was the best of the five games in that Lee was in top form and AlphaGo made few mistakes. Lee resigned about five hours into the game.

The final match was broadcast live on three major TV networks in South Korea and on big TV screens in downtown Seoul.

Google estimated that 60 million people in China, where Go is a popular pastime, watched the first match on Wednesday.

Before AlphaGo's victory, the ancient Chinese board game was seen as too complex for computers to master. Go fans across Asia were astonished when Lee, one of the world's best Go players, lost the first three matches.

Lee's win over AlphaGo in the fourth match, on Sunday, showed the machine was not infallible: Afterward, Lee said AlphaGo's handling of surprise moves was weak. The program also played less well with a black stone, which plays first and has to claim a larger territory than its opponent to win.

Choosing not to exploit that weakness, Lee opted for a black stone in the last match.

Go players take turns placing the black and white stones on 361 grid intersections on a nearly square board. Stones can be captured when they are surrounded by those of their opponent.

To take control of territory, players surround vacant areas with their stones. The game continues until both sides agree there are no more places to put stones, or until one side decides to quit.
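
For readers who like to see the rules in code, the capture rule above can be made concrete with a short flood-fill check: a connected group of stones is captured when it has no adjacent empty points (liberties). The Python sketch below is illustrative only; the board representation is an assumption of the example.

```python
# Sketch of the capture rule described above: a connected group of stones
# is captured when it has no liberties, i.e. no empty points adjacent to
# the group. The board is a dict mapping (row, col) -> "black" / "white";
# empty points are simply absent from the dict.

SIZE = 19  # a 19x19 grid gives the 361 intersections mentioned above

def neighbors(point):
    r, c = point
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield (nr, nc)

def group_is_captured(board, point):
    """Flood-fill the group containing `point`; captured iff no liberties."""
    color = board[point]
    seen, stack = {point}, [point]
    while stack:
        for n in neighbors(stack.pop()):
            if n not in board:
                return False  # an empty adjacent point is a liberty
            if board[n] == color and n not in seen:
                seen.add(n)
                stack.append(n)
    return True  # every boundary point holds an opposing stone

# A lone black stone surrounded on all four sides by white is captured:
board = {(3, 3): "black", (2, 3): "white", (4, 3): "white",
         (3, 2): "white", (3, 4): "white"}
print(group_is_captured(board, (3, 3)))  # True
```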

Google officials say the company wants to apply technologies used in AlphaGo in other areas, such as smartphone assistants, and ultimately to help scientists solve real-world problems.

As for Go, other top players are bracing themselves.

Chinese world Go champion Ke Jie said it was just a matter of time before top Go players like himself would be overtaken by artificial intelligence.

"It is very hard for Go players at my level to improve even a little bit, whereas AlphaGo has hundreds of computers to help it improve and can play hundreds of practice matches a day," Ke said.

"It does not seem like a good thing for we professional Go players, but the match played a very good role in promoting Go," Ke said.

Go here to see the original:
Google's AlphaGo computer beats human champ Lee Sedol in ...

The Pastry A.I. That Learned to Fight Cancer – The New Yorker

One morning in the spring of 2019, I entered a pastry shop in the Ueno train station, in Tokyo. The shop worked cafeteria-style. After taking a tray and tongs at the front, you browsed, plucking what you liked from heaps of baked goods. What first struck me was the selection, which seemed endless: there were croissants, turnovers, Danishes, pies, cakes, and open-faced sandwiches piled up everywhere, sometimes in dozens of varieties. But I was most surprised when I got to the register. At the urging of an attendant, I slid my items onto a glowing rectangle on the counter. A nearby screen displayed an image, shot from above, of my doughnuts and Danish. I watched as a set of jagged, neon-green squiggles appeared around each item, accompanied by its name in Japanese and a price. The system had apparently recognized my pastries by sight. It calculated what I owed, and I paid.

I tried to gather myself while the attendant wrapped and bagged my items. I was still stunned when I got outside. The bakery system had the flavor of magic: a feat seemingly beyond the possible, made to look inevitable. I had often imagined that, someday, I'd be able to point my smartphone camera at a peculiar flower and have it identified, or at a chess board, to study the position. Eventually, the tech would get to the point where one could do such things routinely. Now it appeared that we were in this world already, and that the frontier was pastry.

Computers learned to see only recently. For decades, image recognition was one of the grand challenges in artificial intelligence. As I write this, I can look up at my shelves: they contain books, and a skein of yarn, and a tangled cable, all inside a cabinet whose glass enclosure is reflecting leaves in the trees outside my window. I can't help but parse this scene; about a third of the neurons in my cerebral cortex are implicated in processing visual information. But, to a computer, it's a mess of color and brightness and shadow. A computer has never untangled a cable, doesn't get that glass is reflective, doesn't know that trees sway in the wind. A.I. researchers used to think that, without some kind of model of how the world worked and all that was in it, a computer might never be able to distinguish the parts of complex scenes. The field of computer vision was a zoo of algorithms that made do in the meantime. The prospect of seeing like a human was a distant dream.

All this changed in 2012, when Alex Krizhevsky, a graduate student in computer science, released AlexNet, a program that approached image recognition using a technique called deep learning. AlexNet was a neural network, "deep" because its simulated neurons were arranged in many layers. As the network was shown new images, it guessed what was in them; inevitably, it was wrong, but after each guess it was made to adjust the connections between its layers of neurons, until it learned to output a label matching the one that researchers provided. (Eventually, the interior layers of such networks can come to resemble the human visual cortex: early layers detect simple features, like edges, while later layers perform more complex tasks, such as picking out shapes.) Deep learning had been around for years, but was thought impractical. AlexNet showed that the technique could be used to solve real-world problems, while still running quickly on cheap computers. Today, virtually every A.I. system you've heard of (Siri, AlphaGo, Google Translate) depends on the technique.
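
The guess-and-adjust loop described above is compact enough to sketch. What follows is a toy training loop in PyTorch, with a deliberately tiny network and random stand-in images; it illustrates the technique, not AlexNet itself or its data.

```python
# Toy sketch of the training loop described above: the network guesses a
# label, the loss measures how wrong the guess was, and backpropagation
# adjusts the connections between layers. The tiny model and random
# "images" are stand-ins for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(                           # a very small "deep" network
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # early layer: edge-like filters
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),                  # later layer: class scores
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)    # fake batch of 32x32 RGB images
labels = torch.randint(0, 10, (64,))   # fake labels from 10 classes

for step in range(100):
    logits = model(images)             # the network's guesses
    loss = loss_fn(logits, labels)     # how wrong the guesses were
    optimizer.zero_grad()
    loss.backward()                    # compute an adjustment per connection
    optimizer.step()                   # apply the adjustments
```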

The drawback of deep learning is that it requires large amounts of specialized data. A deep-learning system for recognizing faces might have to be trained on tens of thousands of portraits, and it won't recognize a dress unless it's also been shown thousands of dresses. Deep-learning researchers, therefore, have learned to collect and label data on an industrial scale. In recent years, we've all joined in the effort: today's facial recognition is particularly good because people tag themselves in pictures that they upload to social networks. Google asks users to label objects that its A.I.s are still learning to identify: that's what you're doing when you take those "Are you a bot?" tests, in which you select all the squares containing bridges, crosswalks, or streetlights. Even so, there are blind spots. Self-driving cars have been known to struggle with unusual signage, such as the blue stop signs found in Hawaii, or signs obscured by dirt or trees. In 2017, a group of computer scientists at the University of California, Berkeley, pointed out that, on the Internet, almost all the images tagged as bedrooms are "clearly staged and depict a made bed from 2-3 meters away." As a result, networks have trouble recognizing real bedrooms.

It's possible to fill in these blind spots through focussed effort. A few years ago, I interviewed for a job at a company that was using deep learning to read X-rays, starting with bone fractures. The programmers asked surgeons and radiologists from some of the best hospitals in the U.S. to label a library of images. (The job I interviewed for wouldn't have involved the deep-learning system; instead, I'd help improve the Microsoft Paint-like program that the doctors used for labelling.) In Tokyo, outside the bakery, I wondered whether the pastry recognizer could possibly be relying on a similar effort. But it was hard to imagine a team of bakers assiduously photographing and labelling each batch as it came out of the oven, tens of thousands of times, for all the varieties on offer. My partner suggested that the bakery might be working with templates, such that every pain au chocolat would have precisely the same shape. An alternative, suggested by the machine's retro graphics (but perplexing, given the system's uncanny performance), was that it wasn't using deep learning. Maybe someone had gone down the old road of computer vision. Maybe, by really considering what pastry looked like, they had taught their software to see it.

Hisashi Kambe, the man behind the pastry A.I., grew up in Nishiwaki City, a small town that sits at Japan's geographic center. The city calls itself "Japan's navel"; surrounded by mountains and rice fields, it's best known for airy, yarn-dyed cotton fabrics woven in intricate patterns, which have been made there since the eighteenth century. As a teen-ager, Kambe planned to take over his father's lumber business, which supplied wood to homes built in the traditional style. But he went to college in Tokyo and, after graduating, in 1974, took a job in Osaka at Matsushita Electric Works, which later became Panasonic. There, he managed the company's relationship with I.B.M. Finding himself in over his head, he took computer classes at night and fell in love with the machines.

In his late twenties, Kambe came home to Nishiwaki, splitting his time between the lumber mill and a local job-training center, where he taught computer classes. Interest in computers was soaring, and he spent more and more time at the school; meanwhile, more houses in the area were being built in a Western style, and traditional carpentry was in decline. Kambe decided to forgo the family business. Instead, in 1982, he started a small software company. In taking on projects, he followed his own curiosity. In 1983, he began working with NHK, one of Japan's largest broadcasters. Kambe, his wife, and two other programmers developed a graphics system for displaying the score during baseball games and exchange rates on the nightly news. In 1984, Kambe took on a problem of special significance in Nishiwaki. Textiles were often woven on looms controlled by planning programs; the programs, written on printed cards, looked like sheet music. A small mistake on a planning card could produce fabric with a wildly incorrect pattern. So Kambe developed SUPER TEX-SIM, a program that allowed textile manufacturers to simulate the design process, with interactive yarn and color editors. It sold poorly until 1985, when a series of breaks led to a distribution deal with Mitsubishi's fabric division. Kambe formally incorporated as BRAIN Co., Ltd.

For twenty years, BRAIN took on projects that revolved, in various ways, around seeing. The company made a system for rendering kanji characters on personal computers, a tool that helped engineers design bridges, systems for onscreen graphics, and more textile simulators. Then, in 2007, BRAIN was approached by a restaurant chain that had decided to spin off a line of bakeries. Bread had always been an import in Japan (the Japanese word for it, pan, comes from Portuguese), and the country's rich history of trade had left consumers with ecumenical tastes. Unlike French boulangeries, which might stake their reputations on a handful of staples, Japanese bakeries emphasized range. (In Japan, even Kit Kats come in more than three hundred flavors, including yogurt sake and cheesecake.) New kinds of baked goods were being invented all the time: the carbonara, for instance, takes the Italian pasta dish and turns it into a kind of breakfast sandwich, with a piece of bacon, slathered in egg, cheese, and pepper, baked open-faced atop a roll; the ham corn pulls a similar trick, but uses a mixture of corn and mayo for its topping. Every kind of baked good was an opportunity for innovation.

Analysts at the new bakery venture conducted market research. They found that a bakery sold more the more varieties it offered; a bakery offering a hundred items sold almost twice as much as one selling thirty. They also discovered that "naked" pastries, sitting in open baskets, sold three times as well as pastries that were individually wrapped, because they appeared fresher. These two facts conspired to create a crisis: with hundreds of pastry types, but no wrappers (and, therefore, no bar codes), new cashiers had to spend months memorizing what each variety looked like, and its price. The checkout process was difficult and error-prone (the cashier would fumble at the register, handling each item individually) and also unsanitary and slow. Lines in pastry shops grew longer and longer. The restaurant chain turned to BRAIN for help. Could they automate the checkout process?

AlexNet was five years in the future; even if Kambe and his team could have photographed thousands of pastries, they couldn't have pulled a neural network off the shelf. Instead, the state of the art in computer vision involved piecing together a pipeline of algorithms, each charged with a specific task. Suppose that you wanted to build a pedestrian-recognition system. You'd start with an algorithm that massaged the brightness and colors in your image, so that you weren't stymied by someone's red shirt. Next, you might add algorithms that identified regions of interest, perhaps by noticing the zebra pattern of a crosswalk. Only then could you begin analyzing image features: patterns of gradients and contrasts that could help you pick out the distinctive curve of someone's shoulders, or the "A" made by a torso and legs. At each stage, you could choose from dozens if not hundreds of algorithms, and ways of combining them.
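
That style of pipeline can still be run today. The sketch below uses OpenCV's classic HOG-plus-linear-SVM people detector, which works much as described: normalize the image, scan windows at several scales, and score gradient-and-contrast features. The file name and parameters are illustrative choices, not anything from BRAIN's system.

```python
# Sketch of the pre-deep-learning pipeline described above, using OpenCV's
# built-in HOG (histogram of oriented gradients) pedestrian detector:
# a hand-built feature extractor feeding a linear classifier.
import cv2

image = cv2.imread("street.jpg")  # illustrative file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)     # massage brightness so lighting varies less

hog = cv2.HOGDescriptor()         # gradient/contrast features per window
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Slide windows over the image at several scales; each detection is a box
# where the feature pattern matches the learned shape of a pedestrian.
boxes, weights = hog.detectMultiScale(gray, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    print(f"pedestrian candidate at x={x}, y={y}, size {w}x{h}")
```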

For the BRAIN team, progress was hard-won. They started by trying to get the cleanest picture possible. A document outlining the company's early R. & D. efforts contains a triptych of pastries: a carbonara sandwich, a ham corn, and a minced potato. This trio of lookalikes was one of the system's early nemeses: "As you see," the text below the photograph reads, "the bread is basically brown and round." The engineers confronted two categories of problem. The first they called "similarity among different kinds": a bacon pain d'épi, for instance (a sort of braided baguette with bacon inside), has a complicated knotted structure that makes it easy to mistake for sweet-potato bread. The second was "difference among same kinds": even a croissant came in many shapes and sizes, depending on how you baked it; a cream doughnut didn't look the same once its powdered sugar had melted.

In 2008, the financial crisis dried up BRAIN's other business. Kambe was alarmed to realize that he had bet his company, which was having to make layoffs, on the pastry project. The situation lent the team a kind of maniacal focus. The company developed ten BakeryScan prototypes in two years, with new image preprocessors and classifiers. They tried out different cameras and light bulbs. By combining and rewriting numberless algorithms, they managed to build a system with ninety-eight-per-cent accuracy across fifty varieties of bread. (At the office, they were nothing if not well fed.) But this was all under carefully controlled conditions. In a real bakery, the lighting changes constantly, and BRAIN's software had to work no matter the season or the time of day. Items would often be placed on the device haphazardly: two pastries that touched looked like one big pastry. A subsystem was developed to handle this scenario. Another subsystem, called Magnet, was made to address the opposite problem of a pastry that had been accidentally ripped apart.
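
The touching-pastries problem is a classic segmentation failure, and a standard recipe shows both why it is hard and how one might begin to address it. The sketch below uses a distance transform to find one peak per item; it is a generic OpenCV illustration under assumed conditions (a dark tray, a single overhead camera), not BRAIN's actual subsystem.

```python
# Sketch of the touching-items problem described above: a simple threshold
# merges two adjacent pastries into one blob, while the peaks of a distance
# transform still mark one center per item. A generic OpenCV recipe offered
# as an illustration; the file name and threshold are assumptions.
import cv2
import numpy as np

gray = cv2.imread("tray.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative file name
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Counting blobs naively merges touching pastries into one component.
naive_count, _ = cv2.connectedComponents(mask)
print("naive blob count:", naive_count - 1)  # minus the background label

# The distance transform peaks near each item's center, so two touching
# pastries yield two separate peaks even though their outlines merge.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, peaks = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
peak_count, _ = cv2.connectedComponents(peaks.astype(np.uint8))
print("estimated item count:", peak_count - 1)
```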

Read more from the original source:
The Pastry A.I. That Learned to Fight Cancer - The New Yorker