Media Search:



‘Embarrassing’ Court Document Google Wanted to Hide Finally … – Slashdot

America's Department of Justice "has finally posted what Judge Amit Mehta described at the Google search antitrust trial as an 'embarrassing' exhibit that Google tried to hide from the public," reports Ars Technica: The document in question contains meeting notes that Google's vice president for finance, Michael Roszak, "created for a course on communications," Bloomberg reported. In his notes, Roszak wrote that Google's search advertising "is one of the world's greatest business models ever created" with economics that only certain "illicit businesses" selling "cigarettes or drugs" "could rival."

At trial, Roszak told the court that he didn't recall if he ever gave the presentation. He said that the course required that he tell students "things I don't believe as part of the presentation." He also claimed that the notes were "full of hyperbole and exaggeration" and did not reflect his true beliefs, "because there was no business purpose associated with it." According to Bloomberg, Google repeatedly objected to the document being shared in court, claiming it was irrelevant to the DOJ's case. Then, after Mehta allowed the DOJ to present the document as evidence, Google tried to seal off Roszak's testimony on the document...

Beyond likening Google's search advertising business to illicit drug markets, Roszak's notes also said that because users got hooked on Google's search engine, Google was able to "mostly ignore the demand side" of "fundamental laws of economics" and "only focus on the supply side of advertisers, ad formats, and sales." This was likely the bit that actually interested the DOJ. "We could essentially tear the economics textbook in half," Roszak's notes said. Part of the DOJ's case argues that because Google has a monopoly over search, it's less incentivized to innovate products that protect consumers from harm like invasive data collection.

A Google spokesman told Bloomberg that Roszak's statements "don't reflect the company's opinion" and "were drafted for a public speaking class in which the instructions were to say something hyperbolic and attention-grabbing." The spokesman also noted that Roszak "testified he didn't believe the statements to be true."

More here:
'Embarrassing' Court Document Google Wanted to Hide Finally ... - Slashdot

H&R Block, Meta, and Google Slapped With RICO Suit, Allegedly … – Slashdot

Anyone who has used H&R Block's tax return preparation services since 2015 "may have unintentionally helped line Meta and Google's pocket," reports Gizmodo: That's according to a new class action lawsuit which alleges the three companies "jointly schemed" to install trackers on the H&R Block site to scan and transmit tax data back to the tech companies which then used elements of the data to engage in targeted advertising.

Attorneys bringing the case forward claim the three companies' conduct amounts to a "pattern of racketeering activity" covered under the Racketeer Influenced and Corrupt Organizations Act (RICO), a tool typically reserved for organized crime. "H&R Block, Google, and Meta ignored data privacy laws, and passed information about people's financial lives around like candy," said Brent Wisner, one of the attorneys bringing the complaint forward.

The lawsuit, filed in the Northern District of California this week, stems from a bombshell Congressional report released earlier this year detailing the way multiple tax preparation firms, including H&R Block, "recklessly" shared the sensitive tax data of tens of millions of Americans without proper safeguards. At issue is the tax preparation firms' use of tracking "pixels" placed on their websites. These trackers, which the lawsuit refers to as "spy cams," would allegedly scan tax documents and reveal a variety of personal tax information, including a filer's name, filing status, federal taxes owed, address, and number of dependents. That data was then anonymized and used for targeted advertising and to train Meta's AI algorithms, the congressional report notes. The attorneys argue that H&R Block, Meta, and Google "explicitly and intentionally" entered into an agreement to violate taxpayers' privacy rights for financial gain, according to the article. The suit seeks refunds and punitive damages.
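
For readers unfamiliar with the mechanism, a tracking pixel is just a tiny image whose URL carries data in its query string; when the page loads, the browser's request delivers that data to a third-party server. The sketch below illustrates the idea in Python; the endpoint and field names are hypothetical, not details taken from the lawsuit.

```python
from urllib.parse import urlencode

def build_pixel_url(base_url: str, payload: dict) -> str:
    """Return the URL a 1x1 tracking-pixel <img> tag would request."""
    return f"{base_url}?{urlencode(payload)}"

# Hypothetical fields of the kind the complaint says were scanned from
# tax pages; none of these names come from the actual filing.
payload = {
    "event": "page_view",
    "filing_status": "married_filing_jointly",
    "federal_refund_usd": "1234",
    "dependents": "2",
}

print(build_pixel_url("https://tracker.example.com/pixel.gif", payload))
# -> https://tracker.example.com/pixel.gif?event=page_view&filing_status=...
```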

Read more here:
H&R Block, Meta, and Google Slapped With RICO Suit, Allegedly ... - Slashdot

FBI Indicts Goldman Sachs Analyst Who Tried Using Xbox Chat for … – Slashdot

Kotaku reports: A newly unsealed FBI indictment accuses a former analyst at Goldman Sachs of insider trading, including allegedly using an Xbox to pass tips onto his close friends. The friend group earned over $400,000 in ill-gotten gains as a result, federal prosecutors claim. "There's no tracing [Xbox 360 chat]," the analyst allegedly told his friend who was worried they might be discovered.

He appears to have made a grave miscalculation.

The FBI arrested Anthony Viggiano and alleged co-conspirator Christopher Salamone, charging them with securities fraud on September 28. Viggiano is accused of using his previous position at Goldman Sachs to share trading tips with Salamone and others. Salamone has already pleaded guilty. Bloomberg reports that this is the fifth incident in recent years of a person associated with the investment bank allegedly using their position to do crimes...

Probably best to keep the crime talk on Xbox to a minimum either way, especially now that Microsoft is using AI to monitor communications for illicit and toxic activities. In a statement, an FBI official said, "This indictment is yet another example of individuals believing they can get away with benefiting from trading on material non-public information."

Continued here:
FBI Indicts Goldman Sachs Analyst Who Tried Using Xbox Chat for ... - Slashdot

Quantum Computers Could Crack Encryption Sooner Than Expected With New Algorithm – Singularity Hub

One of the most well-established and disruptive uses for a future quantum computer is the ability to crack encryption. A new algorithm could significantly lower the barrier to achieving this.

Despite all the hype around quantum computing, there are still significant question marks around what quantum computers will actually be useful for. There are hopes they could accelerate everything from optimization processes to machine learning, but how much easier and faster they'll be remains unclear in many cases.

One thing is pretty certain though: A sufficiently powerful quantum computer could render our leading cryptographic schemes worthless. While the mathematical puzzles underpinning them are virtually unsolvable by classical computers, they would be entirely tractable for a large enough quantum computer. That's a problem because these schemes secure most of our information online.

The saving grace has been that today's quantum processors are a long way from the kind of scale required. But according to a report in Science, New York University computer scientist Oded Regev has discovered a new algorithm that could reduce the number of qubits required substantially.

The approach essentially reworks one of the most successful quantum algorithms to date. In 1994, Peter Shor at MIT devised a way to work out which prime numbers need to be multiplied together to give a particular number, a problem known as prime factoring.

For large numbers, this is an incredibly difficult problem that quickly becomes intractable on conventional computers, which is why it was used as the basis for the popular RSA encryption scheme. But by taking advantage of quantum phenomena like superposition and entanglement, Shor's algorithm can solve these problems even for incredibly large numbers.
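
To make the RSA connection concrete, here is a minimal sketch in Python with deliberately tiny primes, showing that whoever factors the public modulus can reconstruct the private key. Real RSA uses roughly 2048-bit moduli and padding schemes; everything below is illustrative only.

```python
# Toy RSA with tiny primes, purely to illustrate why factoring breaks the
# scheme. Requires Python 3.8+ for pow(e, -1, phi) modular inverses.
p, q = 61, 53                      # secret primes
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
phi = (p - 1) * (q - 1)            # computable only if you know p and q
d = pow(e, -1, phi)                # private exponent

msg = 42
cipher = pow(msg, e, n)            # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg    # decrypt with the private key d

def factor(m):
    """Brute-force trial division: fine for 3233, hopeless for 617 digits."""
    f = 2
    while m % f:
        f += 1
    return f, m // f

# An attacker who factors n recovers phi, and with it the private key:
p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d2, n) == msg   # the reconstructed key decrypts too
```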

That fact has led to no small amount of panic among security experts, not least because hackers and spies can hoover up encrypted data today and then simply wait for the development of sufficiently powerful quantum computers to crack it. And although post-quantum encryption standards have been developed, implementing them across the web could take many years.

It is likely to be quite a long wait though. Most implementations of RSA rely on at least 2048-bit keys, which is equivalent to a number 617 digits long. Fujitsu researchers recently calculated that it would take a completely fault-tolerant quantum computer with 10,000 qubits 104 days to crack a number that large.

However, Regev's new algorithm, described in a pre-print published on arXiv, could potentially reduce those requirements substantially. Regev has essentially reworked Shor's algorithm such that it's possible to find a number's prime factors using far fewer logical steps. Carrying out operations in a quantum computer involves creating small circuits from a few qubits, known as gates, that perform simple logical operations.

In Shor's original algorithm, the number of gates required to factor a number is the square of the number of bits used to represent it, denoted as n^2. Regev's approach would only require n^1.5 gates because it searches for prime factors by carrying out smaller multiplications of many numbers rather than very large multiplications of a single number. It also reduces the number of gates required by using a classical algorithm to further process the outputs.

In the paper, Regev estimates that for a 2048-bit number this could reduce the number of gates required by two to three orders of magnitude. If true, that could enable much smaller quantum computers to crack RSA encryption.
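
As a rough sanity check, one can compare the two gate-count scalings at n = 2048 directly. The asymptotic gap alone is a factor of sqrt(n), roughly 45x; the two-to-three-orders-of-magnitude reduction Regev estimates also depends on constant factors that this back-of-the-envelope comparison ignores.

```python
# Compare the asymptotic gate counts quoted above at RSA-2048 size.
n = 2048                     # bits in the modulus

shor_scale = n ** 2          # Shor: ~4.2 million
regev_scale = n ** 1.5       # Regev: ~93 thousand

print(f"n^2   gates: {shor_scale:,}")
print(f"n^1.5 gates: {regev_scale:,.0f}")
print(f"asymptotic ratio (sqrt(n)): {shor_scale / regev_scale:.1f}x")
```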

However, there are practical limitations. For a start, Regev notes that Shor's algorithm benefits from a host of optimizations developed over the years that reduce the number of qubits required to run it. It's unclear yet whether these optimizations would work on the new approach.

Martin Ekerå, a quantum computing researcher with the Swedish government, also told Science that Regev's algorithm appears to need quantum memory to store intermediate values. Providing that memory will require extra qubits and eat into any computational advantage it has.

Nonetheless, the new research is a timely reminder that, when it comes to quantum computing's threat to encryption, the goalposts are constantly moving, and shifting to post-quantum schemes can't happen fast enough.


Read this article:
Quantum Computers Could Crack Encryption Sooner Than Expected With New Algorithm - Singularity Hub

MIT’s Superconducting Qubit Breakthrough Boosts Quantum Performance – Tom’s Hardware

Science (like us) isn't always sure where the best possible future lies, and computing is no exception. Whether in classic semiconductor systems or in the forward-looking reality of quantum computing, there are sometimes multiple paths forward (and here's our primer on quantum computing if you want a refresher). Transmon superconducting qubits (such as the ones used by IBM, Google, and Alice & Bob) have gained traction as one of the most promising qubit types. But new MIT research could open the door to another type of superconducting qubit that is more stable and could support more complex computation circuits: fluxonium qubits.

Qubits are the quantum computing equivalent of transistors: get increasing numbers of them together and, in theory, you get increased computing performance. But while transistors are deterministic and can only represent a binary system (think of the result being either side of a coin, mapped to either 0 or 1), qubits are probabilistic and can represent the different positions of the coin while it's spinning in the air. This lets you explore a bigger space of possible solutions than can easily be represented in binary, which is why quantum computing can offer much faster processing of certain problems.
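
The spinning-coin analogy can be made concrete in a few lines of NumPy: a qubit in an equal superposition yields 0 or 1 at random when measured, unlike a deterministic bit. This is a generic textbook illustration of the Born rule, not code tied to any hardware discussed in the article.

```python
import numpy as np

# |psi> = (|0> + |1>) / sqrt(2): the equal superposition a Hadamard gate
# produces from |0> -- the "coin still spinning in the air."
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(psi) ** 2          # [0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print("fraction of 1s:", samples.mean())  # ~0.5; each shot is random
```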

One current limitation of quantum computing is the accuracy of the computed results: if you're looking for, say, new healthcare drug designs, it'd be an understatement to say you need the results to be correct, replicable, and demonstrable. But qubits are sensitive to external stressors such as temperature, magnetism, vibrations, and fundamental particle collisions, which can introduce errors into the computation or collapse entangled states entirely. Qubits being far more prone to external interference than transistors is one of the roadblocks to quantum advantage, so part of the solution lies in improving the accuracy of the computed results.

It's also not just a matter of applying error-correcting codes to low-accuracy results and magically turning them into the correct results we want. IBM's recent breakthrough in this area (which applies to transmon qubits) showed the effects of an error-correction code that predicted the environmental interference within a qubit system. Being able to predict interference means you can account for its effects within the skewed results and compensate for them accordingly, arriving at the desired ground truth.

But for error-correction codes to be applicable at all, the system must already have passed a "fidelity threshold": a minimum operating accuracy at which error-correcting codes become sufficient to extract predictably useful, accurate results from a quantum computer.
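
The threshold idea has a simple classical analogue, sketched below: a three-copy repetition code with majority voting. Real quantum error correction is far more elaborate than this, but the same pattern holds: below the threshold, redundancy suppresses errors; above it, redundancy makes things worse.

```python
# Classical stand-in for an error-correction threshold: encode one bit as
# three copies and decode by majority vote.
def logical_error(p: float) -> float:
    """Probability the majority vote is wrong, given independent
    per-copy error rate p: two or three of the copies must flip."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.4, 0.6):
    verdict = "helps" if logical_error(p) < p else "hurts"
    print(f"p={p:<5} logical error={logical_error(p):.4f}  encoding {verdict}")
# For this toy code the threshold is p = 0.5: below it, encoding wins.
```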

Some qubit architectures, such as the fluxonium qubits this research is based on, possess higher base stability against external interference. This enables them to stay coherent for longer, coherence time being a measure of how long the qubit system can be effectively used before total information loss. Researchers are interested in fluxonium qubits because they've already unlocked coherence times of more than a millisecond, around ten times longer than can be achieved with transmon superconducting qubits.

The novel qubit architecture enables operations to be performed between fluxonium qubits with high accuracy. Using it, the research team ran fluxonium-based two-qubit gates at 99.9% accuracy and single-qubit gates at a record 99.99% accuracy. The architecture and design were published under the title "High-Fidelity, Frequency-Flexible Two-Qubit Fluxonium Gates with a Transmon Coupler" in Physical Review X.

You can think of fluxonium qubits as an alternative qubit architecture with its own strengths and weaknesses, rather than as an evolution of the quantum computing that came before. Transmon qubits are made of a single Josephson junction shunted by a large capacitor, while fluxonium qubits are made of a small Josephson junction in series with an array of larger junctions or a high-kinetic-inductance material. This is partly why fluxonium qubits are harder to scale: they require more sophisticated coupling schemes between qubits, sometimes even using transmon qubits for the purpose. The fluxonium architecture described in the paper does just that, in what's called a Fluxonium-Transmon-Fluxonium (FTF) architecture.

Transmon qubits such as the ones used by IBM and Google are relatively easy to assemble into bigger qubit arrays (IBM's Osprey is already at 433 qubits) and have faster operation times, performing fast, simple gate operations mediated by microwave pulses. Fluxonium qubits, driven by shaped pulses, offer slower but more accurate gate operations than a transmon-only approach would enable.

There's no promise of an easy road to quantum advantage through any qubit architecture, which is why so many companies are pursuing their differing approaches. In this scenario, it may be useful to think of this Noisy Intermediate-Scale Quantum (NISQ) era as the age where multiple quantum architectures flourish. From topological superconductors (as per Microsoft) through diamond vacancies, transmon superconducting qubits (IBM, Google, and others), ion traps, and a myriad of other approaches, this is the age where we will settle into certain patterns within quantum computing. All architectures may flourish, but it's perhaps more likely that only some will, which also explains why states and corporations aren't pursuing a single qubit architecture as their main focus.

The numerous, apparently viable approaches to quantum computing we're witnessing place us at a branching point much like the one before x86 gained dominance as the premier architecture for binary computing. It remains to be seen whether the quantum computing future will readily (and peacefully) settle on a particular technology, and what a heterogeneous quantum future will look like.

See the original post:
MIT's Superconducting Qubit Breakthrough Boosts Quantum Performance - Tom's Hardware