Media Search:



Terra Classic Project Burns 2 Billion LUNC Tokens, Overtaking … – CoinGape

DFLunc, a new Terra Classic project gaining popularity for its massive LUNC burn mechanism, has burned almost 2 billion LUNC tokens in two weeks. The DeFi protocol was launched in April to deflate the LUNC circulating supply more rapidly by burning billions of tokens. With the protocol's massive burn, the total LUNC burned has surpassed 57.8 billion.

Binance burned 1.27 billion LUNC as part of its monthly LUNC burn mechanism on May 1. To date, Binance has burned 31.83 billion LUNC tokens.

DFLunc on May 12 shared on Twitter that it has burned over 1.6 billion LUNC, overtaking Binance's 1.27 billion LUNC burn. The Terra Classic community has burned almost 2 billion LUNC through the DFLunc protocol as it begins to attract attention. The DeFi protocol consists of multiple smart contracts that deflate the LUNC supply through a continuous burn mechanism.

DFLunc Protocol also runs a validator on Terra Classic and allows users to mint its DFC token only by burning LUNC tokens. It utilizes two CosmWasm-based smart contracts, DFLunc and CW20-DFC. Users burn LUNC, paying USTC as a protocol fee, to mint DFC tokens; the more that is staked through the validator, the more LUNC the protocol burns.
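The burn-to-mint flow described above can be sketched in a few lines. The fee rate and mint ratio below are illustrative assumptions, not published DFLunc parameters; the actual protocol runs as CosmWasm contracts on-chain.

```python
# Toy model of a burn-to-mint flow like the one described: a user burns
# LUNC, pays a USTC protocol fee, and receives DFC. The fee rate and
# 1:1 mint ratio are invented for illustration only.

FEE_RATE_USTC_PER_LUNC = 0.000001  # hypothetical fee schedule
MINT_RATIO_DFC_PER_LUNC = 1.0      # hypothetical mint ratio

def burn_to_mint(lunc_amount: float) -> dict:
    """Burn LUNC, charge a USTC fee, and mint DFC (toy accounting only)."""
    if lunc_amount <= 0:
        raise ValueError("burn amount must be positive")
    return {
        "lunc_burned": lunc_amount,
        "ustc_fee": lunc_amount * FEE_RATE_USTC_PER_LUNC,
        "dfc_minted": lunc_amount * MINT_RATIO_DFC_PER_LUNC,
    }

# A running total across many users mirrors the protocol-wide burn figure.
total_burned = sum(burn_to_mint(a)["lunc_burned"] for a in [1e9, 6e8, 4e8])
print(total_burned)  # 2000000000.0 LUNC burned in this toy example
```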

The protocol has divided its plan into stages that ultimately aim at growing its validator on the Terra Classic chain.

As per the transactions seen by CoinGape Media, the protocol is still burning LUNC through its contract address. The total burn by the community has now reached 57.8 billion LUNC tokens.


Terra Classic's core developer group, the Joint L1 Task Force (L1TF), is preparing for the v2.0.1 upgrade after the community successfully passed Proposal 11511. The Terra Classic blockchain will halt at block 12,812,900, estimated for May 17 at 17:11 UTC. This will be followed by the CosmWasm 1.1.0 parity upgrade on May 31.

As CoinGape earlier reported, the upgrade includes several critical features, such as a minimum initial deposit for governance proposals to prevent spam, upgraded Cosmos SDK and Tendermint versions, and enhanced code maintainability.

LUNC price jumped 1% in the last 24 hours, with the price currently trading at $0.000090. The 24-hour low and high are $0.000088 and $0.000091, respectively. Furthermore, the trading volume has increased significantly in the last 24 hours, indicating a rise in interest among traders.


Varinder has 10 years of experience in the fintech sector, with over 5 years dedicated to blockchain, crypto, and Web3 developments. A technology enthusiast and analytical thinker, he has shared his knowledge of disruptive technologies in more than 5,000 news stories, articles, and papers. With CoinGape Media, Varinder believes in the huge potential of these innovative future technologies. He is currently covering all the latest updates and developments in the crypto industry.

The presented content may include the personal opinion of the author and is subject to market conditions. Do your own market research before investing in cryptocurrencies. Neither the author nor the publication holds any responsibility for your personal financial losses.


Harsh AI judgements: The impact of training data – Innovation Origins

MIT researchers have discovered that machine-learning models mimicking human decision-making often make harsher judgements than humans, due to being trained on the wrong data. Models should be trained on normative data (labelled by humans for rule defiance), but are typically trained using descriptive data (factual features labelled by humans), leading to over-prediction of rule violations. This inaccuracy can have serious real-world consequences, such as stricter judgements in bail or sentencing decisions. The study highlights the importance of matching training context to deployment context for rule violation detection models and suggests that dataset transparency and transfer learning could help mitigate the problem.
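The descriptive-versus-normative gap is easy to see with a toy calculation. The labels below are invented: the same items are labelled once for a factual feature and once for an actual rule violation, and the descriptive labels flag more items, which is the over-prediction the study describes.

```python
# Toy illustration of the labelling mismatch. Each item carries a
# descriptive label ("is the feature present?") and a normative label
# ("does it actually violate the rule?"). All labels are invented.

items = [
    {"descriptive": 1, "normative": 1},  # clear violation
    {"descriptive": 1, "normative": 0},  # feature present, judged acceptable
    {"descriptive": 1, "normative": 0},
    {"descriptive": 0, "normative": 0},
    {"descriptive": 0, "normative": 0},
]

descriptive_rate = sum(i["descriptive"] for i in items) / len(items)
normative_rate = sum(i["normative"] for i in items) / len(items)

print(descriptive_rate)  # 0.6 -> what a descriptively trained model learns
print(normative_rate)    # 0.2 -> the violation rate humans actually apply
```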

A separate study involving 6,000 US adults examined views on AI judges, revealing that while AI judges were perceived as less fair than human judges, the gap could be partially offset by increasing the AI judge's interpretability and its ability to provide a hearing. Human judges received an average procedural fairness score of 4.4 on a 7-point scale, while AI judges scored slightly below 4. However, when an AI-led proceeding offered a hearing and rendered interpretable decisions, it was perceived as being as fair as a human-led proceeding without a hearing and with uninterpretable decisions.

Who should define the Ethics of Artificial Intelligence?

Ethics for AI is, to say the least, a controversial topic. Who should define its code is even more contested.

As AI tools like ChatGPT demonstrate higher accuracy in certain domains, such as tumor classification, and pass legal reasoning tests such as Minnesota Law School exams, the human-AI fairness gap may continue to narrow. In some cases, advanced AI decisions are seen as fairer than human judicial decisions, suggesting that future AI judging developments might result in AI proceedings being generally perceived as fairer than human proceedings.

AI-driven legal services are gaining traction, with platforms like LegalZoom providing consumer-level automated legal services. AI has the potential to reduce human bias, emotion, and error in legal settings, addressing the access-to-justice gap experienced by low-income Americans. University of Toronto Professor Gillian K. Hadfield states that AI reduces cost and helps address the access-to-justice crisis. However, she also acknowledges that more work is needed before AI becomes common in courthouses due to the law's intolerance for technical errors.

Blockchain technology is also making its way into legal services. Public blockchains offer transparency, trust, and tamper-free ledgers, with strengths like traceability and decentralization complementing AI to generate trust and provide valuable information about origin and history. Smart contracts are expected to play a role in the evolving legal system, with many commercial contracts likely to be written as smart contracts in the near future. Decentralized justice systems, such as Kleros, use blockchain-based arbitration solutions with smart contracts and crowdsourced jurors.

Improving dataset transparency is one way to address the problem of harsh AI judgements. If researchers know how data were gathered, they can ensure the data are used appropriately. Another possible strategy is transfer learning: fine-tuning a descriptively trained model on a small amount of normative data. This approach, as well as exploring real-world contexts like medical diagnosis, financial auditing, and legal judgments, could help researchers ensure that AI models accurately capture human decision-making and avoid negative consequences.
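The transfer-learning strategy can be sketched as a two-stage training loop: pretrain on plentiful descriptive labels, then fine-tune the same weights on a small normative set. The tiny perceptron and data below are invented for illustration; a real system would use a neural classifier, but the recipe is the same.

```python
# Sketch of fine-tuning a descriptively trained model on normative data.
# A tiny perceptron without bias; features and labels are invented.

def train(weights, data, epochs, lr):
    """Perceptron-style updates; data is a list of (features, label)."""
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
            for i, xi in enumerate(x):
                weights[i] += lr * (y - pred) * xi
    return weights

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Stage 1: lots of descriptive data (feature-presence labels).
descriptive = [([1, 1], 1), ([1, 0], 1), ([0, 1], 0), ([0, 0], 0)] * 10
w = train([0.0, 0.0], descriptive, epochs=5, lr=0.1)

# Stage 2: fine-tune on a handful of normative (rule-violation) labels,
# which judge the [1, 0] case acceptable despite the feature being present.
normative = [([1, 1], 1), ([1, 0], 0)]
w = train(w, normative, epochs=20, lr=0.1)

print(predict(w, [1, 0]))  # 0 -> no longer over-predicts a violation
print(predict(w, [1, 1]))  # 1 -> still flags the clear violation
```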

In conclusion, AI models making harsher judgements on rule violations due to descriptive training data instead of normative data can have real-world implications, such as stricter judicial sentences and potential negative impacts. Researchers suggest improving dataset transparency, matching training context to deployment context, and exploring real-world applications to ensure AI models accurately replicate human decision-making.


To understand AI’s problems look at the shortcuts taken to create it – EastMojo

"A machine can only do whatever we know how to order it to perform," wrote the 19th-century computing pioneer Ada Lovelace. This reassuring statement was made in relation to Charles Babbage's description of the first mechanical computer.

Lady Lovelace could not have known that in 2016, a program called AlphaGo, designed to play and improve at the board game Go, would not only be able to defeat all of its creators, but would do it in ways that they could not explain.


In 2023, the AI chatbot ChatGPT is taking this to another level, holding conversations in multiple languages, solving riddles and even passing legal and medical exams. Our machines are now able to do things that we, their makers, do not know how to order them to do.

This has provoked both excitement and concern about the potential of this technology. Our anxiety comes from not knowing what to expect from these new machines, both in terms of their immediate behaviour and of their future evolution.

We can make some sense of them, and the risks, if we consider that all their successes, and most of their problems, come directly from the particular recipe we are following to create them.

The reason why machines are now able to do things that we, their makers, do not fully understand is because they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.

It's important to understand that machines have become intelligent without thinking in a human way. This realisation alone can greatly reduce confusion, and therefore anxiety.


Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.

In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example those that recommend a new book to a user.

If we want to understand how to handle AI, we can return to a crisis that hit the industry from the late 1980s, when many researchers were still trying to mimic what we thought humans do. For example, they were trying to understand the rules of language or human reasoning, to program them into machines.

That didn't work, so they ended up taking some shortcuts. This move might well turn out to be one of the most consequential decisions in our history.

The first shortcut was to rely on making decisions based on statistical patterns found in data. This removed the need to actually understand the complex phenomena that we wanted the machines to emulate, such as language. The auto-complete feature in your messaging app can guess the next word without understanding your goals.
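A bigram language model is the simplest version of this shortcut, and a few lines of Python show the idea: the next word is guessed purely from co-occurrence counts, with no model of the writer's goals. The corpus below is made up.

```python
# A minimal auto-complete: count which word follows which, then suggest
# the most frequent follower. No grammar, no understanding, just counts.

from collections import Counter, defaultdict

def build_bigrams(text):
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, word):
    """Return the most frequent follower of `word`, as auto-complete does."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = build_bigrams(corpus)
print(suggest(model, "the"))  # 'cat' (follows "the" twice, "mat" once)
print(suggest(model, "on"))   # 'the'
```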


While others had similar ideas before, the first to make this method really work, and stick, was probably Frederick Jelinek at IBM, who invented statistical language models, the ancestors of all GPTs, while working on machine translation.

In the early 1990s, he summed up that first shortcut by quipping: "Whenever I fire a linguist, our system's performance goes up." Though the comment may have been made jokingly, it reflected a real-world shift in the focus of AI away from attempts to emulate the rules of language.

This approach rapidly spread to other domains, introducing a new problem: sourcing the data necessary to train statistical algorithms.

Creating the data specifically for training tasks would have been expensive. A second shortcut became necessary: data could be harvested from the web instead.

As for knowing the intent of users, such as in content recommendation systems, a third shortcut was found: to constantly observe users behaviour and infer from it what they might click on.
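That third shortcut can also be sketched in miniature: score items purely from a click log and recommend whatever was clicked most, with no model of what the user actually wants. The log below is invented.

```python
# A minimal click-based recommender: rank content tags by how often a
# user clicked them. Interest is inferred, never asked for.

from collections import Counter

def recommend(click_log, user, k=2):
    """Return the user's top-k tags by click count (toy inference only)."""
    tag_scores = Counter()
    for event in click_log:
        if event["user"] == user:
            tag_scores[event["tag"]] += 1
    return [tag for tag, _ in tag_scores.most_common(k)]

log = [
    {"user": "a", "tag": "crypto"},
    {"user": "a", "tag": "crypto"},
    {"user": "a", "tag": "ai"},
    {"user": "b", "tag": "sports"},
]
print(recommend(log, "a"))  # ['crypto', 'ai']
```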


By the end of this process, AI was transformed and a new recipe was born. Today, this method is found in all online translation, recommendations and question-answering tools.

For all its success, this recipe also creates problems. How can we be sure that important decisions are made fairly, when we cannot inspect the machines' inner workings?

How can we stop machines from amassing our personal data, when this is the very fuel that makes them operate? How can a machine be expected to stop harmful content from reaching users, when it is designed to learn what makes people click?

It doesn't help that we have deployed all this in a very influential position at the very centre of our digital infrastructure, and have delegated many important decisions to AI.

For instance, algorithms, rather than human decision makers, dictate what we're shown on social media in real time. In 2022, the coroner who ruled on the tragic death of 14-year-old Molly Russell partly blamed an algorithm for showing harmful material to the child without being asked to.


As these concerns derive from the same shortcuts that made the technology possible, it will be challenging to find good solutions. This is also why the initial decision of the Italian privacy authority to block ChatGPT created alarm.

Initially, the authority raised the issues of personal data being gathered from the web without a legal basis, and of the information provided by the chatbot containing errors. This could have represented a serious challenge to the entire approach, and the fact that it was solved by adding legal disclaimers, or changing the terms and conditions, might be a preview of future regulatory struggles.


We need good laws, not doomsaying. The paradigm of AI shifted long ago, but it was not followed by a corresponding shift in our legislation and culture. That time has now come.

An important conversation has started about what we should want from AI, and this will require the involvement of different types of scholars. Hopefully, it will be based on the technical reality of what we have built, and why, rather than on sci-fi fantasies or doomsday scenarios.

Nello Cristianini, Professor of Artificial Intelligence, University of Bath


This article is republished from The Conversation under a Creative Commons license. Read the original article.




A New Dawn of Legal Technology – Global Banking And Finance Review

In the mid-1980s, a bright red computer terminal that provided lawyers with online access to case law was an iconic status symbol. The UBIQ terminal hooked lawyers up to the Lexis service, at the time one of the first legal technology systems, using full-text search capabilities to provide rapid access to information. The conventional wisdom at that time was that computers were soon going to make extensive legal libraries and paper obsolete.

Thirty-five years on, what has happened? After years of chronic underinvestment in technology, and despite data and systems sitting at the core of the digital agenda, lawyers remain comfortable with paper. Legal documentation has not been effectively digitised and, as a result, legal teams continue to wrestle with data management.

With the next inflexion point in legal technology evolution, including artificial intelligence and smart contracts, fast arriving, Eric Mueller, Chief Operating Officer and Managing Director of D2 Legal Technology and an early Lexis developer, asks: is the legal function finally ready to embrace legal tech and unlock tangible business value?

Senior lawyers have been lampooned for years for their lack of technology confidence. Yet each subsequent generation has failed to embrace the innovation in legal tech that could and should have transformed the industry. Thirty-five years ago, however, lawyers were ahead of the curve. A decade before the commercial arrival of the Internet, easy-to-use browsers and intuitive search engines, there was huge excitement surrounding Lexis, one of the first legal technology products.

In 1987, I was a system developer on the advanced technology team of Lexis, and it appeared the legal environment was ready for significant change. The service provided access to all legal case law and it was highly intuitive. Development focused on natural language processing (NLP), a subject still debated today as chatbots become more sophisticated. It included hypertext in an era long before browsers, and used powerful full-text search based on keywords with Boolean operators, with search results presented in a by-relevance list, now ubiquitous in any search engine.
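The search mechanics that made Lexis feel revolutionary can be sketched with an inverted index, a Boolean AND over keywords, and term-frequency ranking. The documents below are invented, and the real service's relevance ranking was considerably more sophisticated.

```python
# Minimal full-text search: an inverted index (term -> doc -> count),
# Boolean AND over query terms, and results ordered by summed term
# frequency as a crude relevance score.

from collections import defaultdict

def build_index(docs):
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term][doc_id] += 1
    return index

def search_and(index, terms):
    """Boolean AND over terms, ranked by summed term frequency."""
    terms = [t.lower() for t in terms]
    hits = set(index[terms[0]])
    for t in terms[1:]:
        hits &= set(index[t])
    return sorted(hits, key=lambda d: -sum(index[t][d] for t in terms))

docs = {
    "case1": "negligence claim dismissed negligence standard applied",
    "case2": "contract dispute negligence mentioned once",
    "case3": "contract renewal terms agreed",
}
idx = build_index(docs)
print(search_and(idx, ["negligence"]))           # ['case1', 'case2']
print(search_and(idx, ["contract", "dispute"]))  # ['case2']
```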

In contrast to today's information services, it was expensive. With no commercially available access to the Internet, Lexis had to be accessed through dedicated terminals. Pricing was per search, putting pressure on users to be very focused on their search terms. The clever decision to use iconic 1980s design for the red UBIQ terminal, combined with the premium price, heightened the service's status: the presence of a UBIQ on a lawyer's desk was a sure sign of success. High-profile senior lawyers saw legal technology as the future.

What happened? Given that lawyers had online access to case law thirty-five years ago, it seems utterly astonishing that the adoption of legal technology has been so limited ever since. This was a tool delivering tangible value. It was revolutionary, replacing expensive libraries of case law books that required regular updates. It removed the need for dedicated librarians and time-consuming manual searching. And while it was expensive, the cost could be both charged back to clients and justified by the removal of paper-based case law libraries.

Lexis completely changed the way lawyers search for information. One of the services even included the ability to construct a dedicated library that could host any type of document, leveraging Lexis's groundbreaking full-text search capabilities and thereby enabling the storage of legal agreements in the database. So why did the adoption of legal technology fail to evolve? Lawyers were slow to adopt PCs, and they were behind the curve in mobile technology. Many legal teams still lack access to fully digitised records, and their concerns about the potential use of smart contracts are linked to this lack of widespread digitisation.

It is extraordinary to consider what the legal industry could have achieved if the early adoption of legal tech had not stalled. Sadly, rather than being innovative and embracing the potential of digital records, the industry has underinvested in both legal technology and good data management for the past three decades. Generation after generation of lawyers has failed to take advantage of the power of legal tech to improve client services, reduce risk and enhance efficiency.

The impact of this lack of investment is evident in every legal environment, from financial services in-house teams to law firms. These organisations continue to struggle to manage large volumes of legal documentation, resulting in additional risk, cost and a loss of productivity. Yet the technology has been available, proven and trusted for decades.

Lawyers have, quite simply, failed to step up and make the business case for investment in legal tech. While other industries have forged ahead, lawyers have accepted the status quo, continuing to rely on paper-based records and time-consuming manual processes. Opportunity after opportunity to improve the efficiency and effectiveness of the legal function has been missed.

Now, however, it is becoming essential to take a far more proactive approach and truly understand the power of legal tech. Debates about disruptive technologies such as AI and chatbots are increasingly placing a spotlight on the role of the lawyer of the future. But how can any legal function consider this next wave of tech innovation while still reliant on outdated processes and undigitised information resources? It is now vital that lawyers step away from the paper-based comfort zone, explore the benefits of legal tech and actively make the business case for investment.

The world has changed massively since 1987 and the rate of change is increasing. It is completely unacceptable that legal functions are not effectively using mature legal tech to improve data quality and accessibility or to support automated processes and de-risk operations. Document digitisation is now a fundamental requirement, not only to meet current demands but to also ensure the legal function is best placed to respond to the challenges of the near future.

The legal industry missed a compelling opportunity to build on early innovation and it cannot afford to roll the clock forward another 35 years without making vital investment in embracing the increasingly digital world.


Ape Brigade: Best Long-term Crypto Project – The Cryptonomist

SPONSORED POST*

Considering all the various speculations in the crypto scene, Cardano (ADA), XRP (XRP), and Ape Brigade (APES) have emerged as the best long-term crypto investments that users and traders can consider in 2023 to reap extended gains and benefits.

Recently, the long-awaited launch of Hydra Head took place on the Cardano blockchain's mainnet, presenting a scaling solution designed to expedite transactions. Optimistic members within the Cardano community speculate that this development could enable the network to handle an impressive 1 million transactions per second (TPS), which would be the fastest in the blockchain space.

Ape Brigade (APES) is an upcoming meme coin that is generating excitement in the cryptocurrency market. With a strong focus on animal welfare, particularly the conservation of apes and wildlife, Ape Brigade dedicates 10% of its token supply to support these efforts. Furthermore, the community stands to benefit from a locked liquidity pool of 20%, ensuring stability and liquidity for the token.

Additionally, an incentivized staking system allows users to earn rewards based on their APES holdings and staking duration, with 15% of the total supply reserved exclusively for staking rewards. Ape Brigade operates as an ERC20 token on the Ethereum blockchain, guaranteeing security, transparency, and accessibility for all.

The project's roadmap includes exciting plans such as the introduction of an NFT Space, which will play a vital role in its ecosystem. With a passionate community, a charitable mission, and advanced technology, Ape Brigade (APES) is poised to make a remarkable impact in the world of cryptocurrencies.

Cardano is a blockchain platform that aims to establish a secure and scalable foundation for decentralized applications and smart contracts. Its primary objective is to create a reliable infrastructure that enables the development of innovative and decentralized solutions.

The launch of Hydra Head follows a series of significant improvements to Cardano (ADA) throughout the year, with a notable focus on decentralized finance (DeFi). Addressing the narratives surrounding the release of the L2 Hydra on the mainnet, Cardano's Technical Director, Matthias Benkort, offered his insights and clarified that the newly introduced scaling solution, Hydra, is presently incapable of handling 1 million TPS.

Each Hydra Head functions as a decentralized mini ledger shared among a small group of users. Besides facilitating faster transactions, this approach also helps reduce costs as well. Developers can utilize Hydra Heads on the Cardano blockchain to build intricate DeFi protocols, which enables the creation of advanced and sophisticated financial applications within the Cardano ecosystem.
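The mini-ledger idea can be illustrated with a toy example: a small group transacts off-chain, and only the final balances are settled back to the main chain. This is a conceptual sketch only, not the actual Hydra protocol, which involves multi-party signing and on-chain commitment of funds.

```python
# Toy "mini ledger": many cheap off-chain transfers among a small group,
# one on-chain settlement at the end. Conceptual sketch only.

class MiniLedger:
    def __init__(self, opening_balances):
        self.balances = dict(opening_balances)  # committed from the main chain
        self.tx_count = 0

    def transfer(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient funds in head")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.tx_count += 1  # these transfers never hit the main chain

    def close(self):
        """Return the final balances to settle on the main chain."""
        return self.balances

head = MiniLedger({"alice": 100, "bob": 50})
head.transfer("alice", "bob", 30)
head.transfer("bob", "alice", 10)
print(head.close())   # {'alice': 80, 'bob': 70}
print(head.tx_count)  # 2 off-chain transfers, 1 on-chain settlement
```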

XRP (XRP) is a digital asset and cryptocurrency that operates on the Ripple payment protocol. It is designed for fast and low-cost international money transfers and is utilized by banks and financial institutions for cross-border transactions. XRPs main goal is to facilitate efficient global money transfers and improve liquidity in the financial industry.

In conclusion, Cardano's recent denial of speculation regarding 1 million transactions per second (TPS) post Hydra Head deployment clarifies the current capabilities of the ADA network. While the launch of Hydra Head brings enhanced scalability to Cardano, it is important to refer to official sources for accurate information.

Meanwhile, if you are looking for the best long-term crypto investments in 2023, Cardano, XRP, and Ape Brigade (APES) stand out. Cardano continues to revolutionize DeFi with its advanced features and scalability, while XRP maintains its position as a reliable choice for seamless cross-border transactions. Ape Brigade (APES) captures attention with its passionate community, commitment to wildlife conservation, and advanced technology as it enters the market with promising potential.

Website: https://apebrigade.io/

Twitter: https://twitter.com/_ApeBrigade_

Telegram: https://t.me/ApeBrigadeOfficial

*This article was paid for. Cryptonomist did not write the article or test the platform.

