Archive for the ‘Artificial Intelligence’ Category

For Telangana, 2020 will be year of artificial intelligence – BusinessLine

With a view to promoting enterprises working on artificial intelligence solutions and taking leadership in this emerging technology space, the Telangana government has decided to observe 2020 as the Year of AI.

Telangana IT Minister KT Rama Rao will formally make the announcement here on January 2, declaring 2020 the Year of AI and releasing a calendar of events for the next 12 months.

The event will see the signing of memoranda of agreement between the government and AI start-ups.

The Information Technology Ministry is preparing a strategy-framework document offering incentives exclusive to AI initiatives.

"We have come up with such documents for Blockchain and drones. With new technologies such as AI and Big Data Analytics expected to generate 8 lakh jobs in the country in the next two years, we will launch a dedicated programme for AI in 2020," Jayesh Ranjan, Principal Secretary, IT and Industries, Government of Telangana, has said.

See the original post:

For Telangana, 2020 will be year of artificial intelligence - BusinessLine

Who will really dominate artificial intelligence capabilities in the future? – Tech Wire Asia

The US is far ahead of everyone else, but China is keen on taking the lead soon.

In the digital age, countries all around the world are racing to excel with artificial intelligence (AI) technology.

The phenomenon is no surprise, considering that AI is undeniably a powerful solution with broad enterprise use across industries, from medical algorithms to autonomous vehicles.

For a while now, the US has dominated the global race in AI development and capabilities, but according to the Global AI Index, China looks set to take over the field in the near future.

As the first runner-up, China is expected to overtake the US in about 5 to 10 years, based on the country's impressive growth record.

Based on seven key indicators (research, infrastructure, talent, development, operating environment, commercial ventures, and government strategy) measured over the course of 12 months, China is promoting growth unlike any other country.

Although the US leads by a wide margin, China has already made concrete efforts to establish greater influence through its Next Generation Artificial Intelligence Development Plan, launched in 2017.

Not only that, China alone has reportedly pledged to spend up to US$22 billion, a mammoth figure compared with total global government AI spending, estimated at US$35 billion over the next decade or so.

Nevertheless, there are areas China must improve if it is to lead successfully in AI.

Scoring 58.3 percent on the index, China appears to lag in talent, commercial ventures, research quality, and private funding.

However, the country has still shown significant growth in various other areas, especially in contributions of AI code. According to GitHub, the world's biggest open-source development platform, Chinese developers have made 13,000 AI code contributions to date.

This is a big jump compared to the initial count of 150 in 2015. The US, however, is still in the lead with a record of 42,000 contributions.

The need to dominate the AI market seems to be the motivation for countries around the world as the technology is a defining asset that can shift the dynamics of the global economy.

Other prominent countries to watch are the UK, Canada, and Germany, ranking 3rd, 4th, and 5th respectively.

Another Asian country making a mark, in 7th place, is Singapore, which posts a high score in talent but has room for improvement in its operating environment.

Despite the quick progress, experts hope that all countries looking to excel in AI will do so with ethical considerations and strategic leadership in mind.

More here:

Who will really dominate artificial intelligence capabilities in the future? - Tech Wire Asia

Fels backs calls to use artificial intelligence as wage-theft detector – The Age

"The amount of underpayment occurring now is so large that there is an effect on wages generally and on making life difficult for law-abiding employers."

Senator Sheldon said artificial intelligence could be used to detect discrepancies in payment data held by the Australian Taxation Office on employers in industries such as retail, hospitality, agriculture and construction.

"You could do it for wages and superannuation, with an algorithm used as a first flag for human intervention," he said.

The problems of underpayment are systemic and not readily resolvable just by strong law enforcement - even though that's vital.

Alistair Muir, chief executive of Sydney-based consultancy Vanteum, said it was possible to "train artificial intelligence algorithms across multiple data sets to detect wage theft as described by Senator Sheldon, without ever needing to move, un-encrypt or disclose the data itself".
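
Muir did not describe how such a system would be built, so the snippet below is only a minimal sketch of the general idea he points at: a federated-style training loop in which each employer's payroll data stays on its own systems and only model updates are shared with a coordinator. All data, feature names, and parameters here are hypothetical, and a real deployment would also need secure aggregation or encryption, which this sketch does not attempt to show.

```python
# Hypothetical sketch only: federated-averaging style training where raw
# payroll data never leaves each "employer"; only gradients are shared.
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(weights, X, y):
    """Logistic-regression gradient computed on one employer's local data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    return X.T @ (preds - y) / len(y)

# Three hypothetical employers, each holding their own (features, underpaid-flag) data.
local_datasets = []
for _ in range(3):
    X = rng.normal(size=(200, 4))            # e.g. hours, weekend hours, pay rate, ... (invented)
    true_w = np.array([1.5, -2.0, 0.5, 0.0]) # synthetic rule used to label the toy data
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    local_datasets.append((X, y))

weights = np.zeros(4)
for _ in range(100):                          # federated rounds
    grads = [local_gradient(weights, X, y) for X, y in local_datasets]
    weights -= 0.5 * np.mean(grads, axis=0)   # only gradients leave each site

print("learned weights:", np.round(weights, 2))
```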

Melbourne University associate professor of computing Vanessa Teague said a "simple computer program" could be designed to detect evidence of wage underpayment using the rules laid out in the award system, but that any such project should safeguard workers' privacy by requiring informed consent.
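
As a rough illustration of the kind of rule-based check Teague describes, the sketch below compares the pay an employee received with the pay expected under a deliberately simplified, hypothetical set of award rules (a flat base rate, overtime after eight hours in a shift, and a weekend penalty). The real awards are far more detailed, every rate and threshold here is invented, and anything flagged would go to a human reviewer rather than being treated as proof of underpayment.

```python
# Hypothetical award parameters; the actual retail award is far more detailed.
from dataclasses import dataclass

BASE_RATE = 25.0        # invented hourly rate, AUD
OVERTIME_MULT = 1.5     # invented: applies after 8 hours in a shift
WEEKEND_MULT = 1.25     # invented weekend penalty rate

@dataclass
class Shift:
    hours: float
    weekend: bool

def expected_pay(shifts):
    """Pay expected for a period under the simplified rules above."""
    total = 0.0
    for s in shifts:
        rate = BASE_RATE * (WEEKEND_MULT if s.weekend else 1.0)
        ordinary = min(s.hours, 8.0)
        overtime = max(s.hours - 8.0, 0.0)
        total += ordinary * rate + overtime * BASE_RATE * OVERTIME_MULT
    return total

def flag_for_review(shifts, amount_paid, tolerance=1.0):
    """First-pass flag only; a human would review anything returned here."""
    shortfall = expected_pay(shifts) - amount_paid
    return shortfall if shortfall > tolerance else None

# Example: one 10-hour weekday shift and one 6-hour weekend shift, paid $400.
week = [Shift(10, False), Shift(6, True)]
print(flag_for_review(week, amount_paid=400.0))   # prints the shortfall, if any
```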

Industrial Relations Minister Christian Porter did not rule out introducing data matching as part of his wage theft crackdown and said workplace exploitation "will not be tolerated by this government".

Mr Porter said the government accepted "in principle" the recommendations of the migrant worker taskforce which included taking a "whole of government" approach and giving the Fair Work Ombudsman expanded information gathering powers.

The taskforce report said inter-governmental information sharing was "an important avenue" for identifying wage underpayment and could be used to "support successful prosecutions".

In the latest case of alleged wage underpayment in the hospitality industry, the company behind the Crown casino eatery fronted by celebrity chef Heston Blumenthal, Dinner by Heston, this week applied to be wound up after failing to comply with a statutory notice requiring it to back pay staff for unpaid overtime.

It follows revelations of underpayments totalling hundreds of millions of dollars by employers including restaurateur George Calombaris' Made Establishment, Qantas, Coles, Commonwealth Bank, Bunnings, Super Retail Group and the Australian Broadcasting Corporation.

Professional services firm PwC has estimated that employers are underpaying Australian workers by $1.4 billion a year, affecting 13 per cent of the nation's workforce.

AI Group chief executive Innes Willox said the employer peak body did not "see a need" for increased governmental data collection powers.

Australian Retail Association president Russell Zimmerman said retailers were not inherently opposed to data matching, as employers who paid workers correctly had "nothing to fear", but he was unsure how effective or accurate the approach would be.

"We don't support wage theft," Mr Zimmerman said.

He blamed the significant underpayments self-reported in recent months on difficulties navigating the "complex" retail award.

Senator Sheldon rejected this argument, saying the system was "only complicated if you don't want to pay".

"You get paid for eight hours, then after that you get overtime and you get weekend penalty rates," he said.

Australian Council of Trade Unions assistant secretary Liam O'Brien said the workplace law system was "failing workers who are suffering from systemic wage theft".

The minister, who is consulting unions and business leaders on the detail of his wage theft bill, including what penalty should apply if employers fail to prevent accidental underpayment, said the draft legislation should be released "early in the new year".

Go here to see the original:

Fels backs calls to use artificial intelligence as wage-theft detector - The Age

China should step up regulation of artificial intelligence in finance, think tank says – msnNOW

A Chinese flag flutters in front of the Great Hall of the People in Beijing, China, May 27, 2019. REUTERS/Jason Lee

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance - referring to banking, securities and other financial products that employ technology such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged behind development, a report from the China Finance 40 Forum showed.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance to be written in to the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risks and act quickly as problems arise.

(Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing)

See the rest here:

China should step up regulation of artificial intelligence in finance, think tank says - msnNOW

In 2020, let's stop AI ethics-washing and actually do something – MIT Technology Review

Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018, from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.

A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?

But talk is just that: it's not enough. For all the lip service paid to these issues, many organizations' AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We're falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers (content moderators, data labelers, transcribers) who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field's runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve.

So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn't lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.

AI, in other words, is meant to help humanity prosper. Let's not forget.

Visit link:

In 2020, let's stop AI ethics-washing and actually do something - MIT Technology Review