Archive for the ‘Machine Learning’ Category

Research Fellow in Machine Learning for Construction job with NORTHUMBRIA UNIVERSITY | 195895 – Times Higher Education (THE)

Research Fellow in Machine Learning for Construction, Faculty of Engineering and Environment

Applications are invited for a Research Fellow in Machine Learning for Construction to contribute to an exciting industry-led project for the development of an AI-driven, real-time command and control centre for site equipment in infrastructure projects. This project follows a successfully completed feasibility project that tested the concept of improving site equipment management through the application of IoT, AI, and BIM. This position is funded by Innovate UK and you will work with a leading team of academics and award-winning digital construction businesses.

As a successful candidate, you will lead the development and implementation of the machine learning and data analytics capabilities of the platform. In particular, you will implement AI techniques for estimating the productivity of site equipment, developing predictive schedules, and generating benchmark data for earthwork.

The ideal candidate will have experience in developing data pipelines within cloud-based platforms such as AWS, Google Cloud or Azure, particularly in the areas of IoT, scalable database design, data processing and machine learning. Experience in developing machine learning applications using programming languages such as Python and R, frameworks such as TensorFlow, and open-source environments such as Jupyter or Conda is essential.

For an informal discussion about the post, please contact Assoc. Prof Dr. Mohamad Kassem by e-mail mohamad.kassem@northumbria.ac.uk

To apply for this vacancy please click 'Apply Now', and submit a Covering Letter and your CV, including a full list of publications where relevant and any documents specifically requested in the Role Description and Person Specification, such as a sample of written work or journal article.

Northumbria University takes pride in, and values, the quality and diversity of our staff. We welcome applications from all members of the community. The University holds an Athena SWAN Bronze award in recognition of our commitment to improving employment practices for the advancement of gender equality and is a member of the Euraxess network, which delivers information and support to professional researchers.

Please note this vacancy will close on 08/03/2020

Read more:
Research Fellow in Machine Learning for Construction job with NORTHUMBRIA UNIVERSITY | 195895 - Times Higher Education (THE)

Top Machine Learning Projects Launched By Google In 2020 (Till Date) – Analytics India Magazine

It may be that time of the year when new year resolutions start to fizzle, but Google seems to be just getting started. The tech giant has been building tools and services to bring the benefits of artificial intelligence (AI) to its users. The company has begun upping its arsenal of AI-powered products with a string of new releases this month alone.

Here is a list of the top products launched by Google in January 2020.

Although first introduced in 2014, the latest iterations of sequence-to-sequence (seq2seq) AI models have strengthened key text-generating tasks, including sentence formation and grammar correction. Google's LaserTagger, which the company has open-sourced, speeds up the text generation process and reduces the chance of errors.

Compared to traditional seq2seq methods, LaserTagger computes predictions up to 100 times faster, making it suitable for real-time applications. Furthermore, it can be plugged into an existing technology stack without adding any noticeable latency on the user side because of its high inference speed. These advantages become even more pronounced when applied at a large scale.
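The core idea behind LaserTagger is to cast text generation as text editing: instead of generating every output token from scratch, the model predicts an edit tag per source token. The toy sketch below illustrates the tag-application step only; the tag vocabulary and function are our own illustration, not Google's released implementation.

```python
def apply_edit_tags(tokens, tags):
    """Apply (action, added_phrase) edit tags to source tokens.

    Each tag is a pair: an action ("KEEP" or "DELETE") and an
    optional phrase to insert before the token. In LaserTagger-style
    systems a model predicts these tags; here they are hand-written.
    """
    out = []
    for token, (action, added) in zip(tokens, tags):
        if added:                 # insert a phrase before this token
            out.append(added)
        if action == "KEEP":
            out.append(token)
        # action == "DELETE": drop the source token
    return " ".join(out)

# Sentence fusion example: merge two sentences into one.
tokens = ["Turing", "was", "born", "in", "1912", ".",
          "He", "died", "in", "1954", "."]
tags = [("KEEP", None)] * 5 + [("DELETE", None), ("DELETE", "and"),
        ("KEEP", None), ("KEEP", None), ("KEEP", None), ("KEEP", None)]
print(apply_edit_tags(tokens, tags))
# -> Turing was born in 1912 and died in 1954 .
```

Because most output tokens are copied from the input, the model only has to predict a small tag set per position, which is a large part of why this approach is so much faster than generating token by token.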

The company has expanded its Coral lineup by unveiling two new Coral AI products: the Coral Dev Board Mini and the Coral Accelerator Module. Announced ahead of the Consumer Electronics Show (CES) this year, the latest additions to the Coral family followed a successful beta run of the platform in October 2019.

The Coral Accelerator Module is a multi-chip package that encapsulates the company's custom-designed Edge Tensor Processing Unit (TPU). The chip inside the Coral Dev Board is designed to execute multiple computer vision models at 30 frames per second or a single model at over 100fps. Users of this technology have said that it is easy to integrate into custom PCB designs.


Google has also released the Coral Dev Board Mini which provides a smaller form-factor, lower-power, and a cost-effective alternative to the Coral Dev Board.


Officially announced in March 2019, the Coral products were intended to help developers work more efficiently by reducing their reliance on connections to cloud-based systems by creating AI that works locally.

Chatbots are one of the hottest trends in AI owing to their tremendous growth in applications. Google has added to the mix with Meena, its human-like, multi-turn, open-domain chatbot. Meena has been trained in an end-to-end fashion on data mined from public social media conversations, totalling more than 300GB of text. Furthermore, it is massive in size, with a 2.6-billion-parameter neural network, and has been trained to minimize the perplexity of the next token.

Furthermore, Google's human evaluation metric, called Sensibleness and Specificity Average (SSA), also captures the key elements of a human-like multi-turn conversation, making this chatbot even more versatile. In a blog post, Google claimed that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots.

Pitched as an important development of Google's Transformer, the novel neural network architecture for language understanding, Reformer is intended to handle context windows of up to 1 million words, all on a single AI accelerator using only 16GB of memory.

Google had first mooted the idea of a new transformer model in a 2019 research paper in collaboration with UC Berkeley. The core idea behind this model was self-attention: the ability to attend to different positions of an input sequence to compute a representation of that sequence, as elaborated in one of our articles.
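The self-attention idea can be sketched in a few lines. In the toy version below, queries, keys, and values are the raw input vectors themselves (real Transformers use learned projections, multiple heads, and scaling over much larger dimensions); each position's output is a softmax-weighted combination of every position in the sequence.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq):
    """For each position, attend to every position of the input
    sequence and return a weighted combination of the value vectors.
    Toy sketch: no learned query/key/value projections."""
    d = len(seq[0])
    out = []
    for q in seq:
        # Dot-product similarity of this position with every position.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        # Weighted sum of all positions' vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, seq))
                    for j in range(d)])
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for row in self_attention(seq):
    print([round(x, 2) for x in row])
```

Because every position attends to every other position, naive self-attention costs grow quadratically with sequence length; Reformer's contribution is precisely to tame that cost so that very long contexts fit on a single accelerator.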

Today, Reformer can process whole books at once on a single device, exhibiting great potential.

Google has time and again reiterated its commitment to the development of AI. Seeing it as more profound than fire or electricity, it firmly believes that this technology can eliminate many of the constraints we face today.

The company has also delved into research anchored around AI that is spread across a host of sectors, whether it be detecting breast cancer or protecting whales or other endangered species.


The rest is here:
Top Machine Learning Projects Launched By Google In 2020 (Till Date) - Analytics India Magazine

AIOps: What You Need To Know – Forbes


AIOps, a term coined by Gartner in 2017, is increasingly becoming a critical part of next-generation IT. In a nutshell, AIOps is applying cognitive computing techniques, such as AI and machine learning, to improve IT operations, said Adnan Masood, who is the Chief Architect of AI & Machine Learning at UST Global. This is not to be confused with the entirely different discipline of MLOps, which focuses on the machine learning operationalization pipeline. AIOps refers to the spectrum of AI capabilities used to address IT operations challenges: for example, detecting outliers and anomalies in the operations data, identifying recurring issues, and applying self-identified solutions to proactively resolve the problem, such as by restarting the application pool, increasing storage or compute, or resetting the password for a locked-out user.
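The anomaly detection Masood describes can be as simple as flagging metrics that drift far from their historical baseline. The sketch below is a minimal stand-in (a z-score test on latency samples, with made-up numbers), not the statistical machinery a production AIOps platform would use:

```python
import statistics

def flag_anomalies(values, threshold=2.5):
    """Return indices of points more than `threshold` standard
    deviations from the mean -- a minimal stand-in for the outlier
    detection an AIOps platform runs on operations metrics."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [i for i, v in enumerate(values)
            if stdev and abs(v - mean) / stdev > threshold]

# Steady response times with one spike at index 8.
latency_ms = [102, 98, 101, 99, 100, 103, 97, 100, 450, 101]
print(flag_anomalies(latency_ms))  # -> [8]
```

A real system would go further, correlating anomalies across services and triggering a remediation such as the application-pool restart mentioned above, but the detect-then-act loop is the same shape.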

The fact is that IT departments are often stretched and starved for resources. Traditional tools have usually been rule-based and inflexible, which has made it difficult to deal with the flood of new technologies.

IT teams have adopted microservices, cloud providers, NoSQL databases, and various other engineering and architectural approaches to help support the demands their businesses are putting on them, said Shekhar Vemuri, who is the CTO of Clairvoyant. But in this rich, heterogeneous, distributed, complex world, it can be a challenge to stay on top of vast amounts of machine-generated data from all these monitoring, alerting and runtime systems. It can get extremely difficult to understand the interactions between various systems and the impact they are having on cost, SLAs, outages and more.

So with AIOps, there is the potential for achieving scale and efficiencies. Such benefits can certainly move the needle for a company, especially as IT has become much more strategic.

From our perspective, AIOps equips IT organizations with the tools to innovate and remain competitive in their industries, effectively managing infrastructure and empowering insights across increasingly complex hybrid and multi-cloud environments, said Ross Ackerman, who is the NetApp Director of Analytics and Transformation. This is accomplished through continuous risk assessments, predictive alerts, and automated case opening to help prevent problems before they occur. At NetApp, we're benefiting from a continuously growing data lake that was established over a decade ago. It was initially used for reactive actions, but with the introduction of more advanced AI and ML, it has evolved to offer predictive and prescriptive insights and guidance. Ultimately, our capabilities have allowed us to save customers over two million hours of lost productivity due to avoided downtime.

As with any new approach, though, AIOps does require much preparation, commitment and monitoring. Let's face it: technologies like AI can be complex and finicky.

The algorithms can take time to learn the environment, so organizations should seek out those AIOps solutions that also include auto-discovery and automated dependency mapping, as these capabilities provide out-of-the-box benefits in terms of root-cause diagnosis, infrastructure visualization, and ensuring CMDBs are accurate and up-to-date, said Vijay Kurkal, who is the CEO of Resolve. These capabilities offer immediate value and instantaneous visibility into what's happening under the hood, with machine learning and AI providing increasing richness and insights over time.

As a result, there should be a clear-cut framework when it comes to AIOps, as Appen's Chief AI Evangelist Alyssa Simpson Rochwerger recommends.

All this requires a different mindset. It's really about looking at things in terms of software application development.

Most enterprise businesses are struggling with a wall to production and need to start realizing a return on their machine learning and AI investments, said Santiago Giraldo, who is a Senior Product Marketing Manager at Cloudera. The problem here is two-fold. One issue is related to technology: businesses must have a complete platform that unifies everything from data management to data science to production. This includes robust functionalities for deploying, serving, monitoring, and governing models. The second issue is mindset: organizations need to adopt a production mindset and approach machine learning and AI holistically in everything from data practices to how the business consumes and uses the resulting predictions.

So yes, AIOps is still in its early days and there will be lots of trial and error. But this approach is likely to be essential.

While the transformative promise of AI has yet to materialize in many parts of the business, AIOps offers a proven, pragmatic path to improved service quality, said Dave Wright, who is the Chief Innovation Officer at ServiceNow. And since it requires little overhead, it's a great pilot for other AI initiatives that have the potential to transform a business.

Tom (@ttaulli) is the author of the book, Artificial Intelligence Basics: A Non-Technical Introduction, as well as the upcoming book, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.

Here is the original post:
AIOps: What You Need To Know - Forbes

New cybersecurity system protects networks with LIDAR, no not that LiDAR – C4ISRNet

When it comes to identifying early cyber threats, it's important to have laser-like precision. Mapping out a threat environment can be done with a range of approaches, and a team of researchers from Purdue University has created a new system for just such applications. They call their approach LIDAR: lifelong, intelligent, diverse, agile and robust.

This is not to be confused with LiDAR, for Light Detection and Ranging, a kind of remote sensing system that uses laser pulses to measure distances from the sensor. The light-specific LiDAR, sometimes also written LIDAR, is a valuable tool for remote sensing and mapping, and features prominently in the awareness tools of self-driving vehicles.

Purdue's LIDAR, instead, is a kind of architecture for network security. It can adapt to threats, thanks in part to its ability to learn in three ways. The first is supervised machine learning, where an algorithm looks at unusual features in the system and compares them to known attacks. An unsupervised machine learning component looks through the whole system for anything unusual, not just unusual features that resemble attacks. These two machine-learning components are mediated by a rule-based supervisor.
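The three-part structure can be sketched in miniature: a "supervised" check against known attack signatures, an "unsupervised" check for anything unusual, and a rule-based supervisor that mediates between them. Everything below (the signatures, thresholds, and actions) is our own hypothetical illustration, not Purdue's code:

```python
# Hypothetical known-attack signature set for the supervised check.
KNOWN_ATTACK_SIGNATURES = {"port_scan", "sql_injection"}

def supervised_detector(event):
    """Compare the event against known attack patterns."""
    return event["signature"] in KNOWN_ATTACK_SIGNATURES

def unsupervised_detector(event, baseline_rate=100):
    """Flag anything unusual, e.g. traffic far above a baseline."""
    return event["requests_per_sec"] > 10 * baseline_rate

def rule_based_supervisor(event):
    """Mediate between the two detectors and decide the next step."""
    if supervised_detector(event):
        return "block"        # matches a known attack: act immediately
    if unsupervised_detector(event):
        return "quarantine"   # unusual but unrecognized: trap for study
    return "allow"

event = {"signature": "unknown", "requests_per_sec": 5000}
print(rule_based_supervisor(event))  # -> quarantine
```

The "quarantine" branch is where a honeypot-style trap would fit: unrecognized anomalies get isolated and studied, and what is learned can be folded back into the signature set over time.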

One of the fascinating things about LIDAR is that the rule-based learning component really serves as the brain for the operation, said Aly El Gamal, an assistant professor of electrical and computer engineering in Purdue's College of Engineering. That component takes the information from the other two parts and decides the validity of a potential attack and necessary steps to move forward.

By knowing existing attacks, matching to detected threats, and learning from experience, this LIDAR system can potentially offer a long-term solution based on how the machines themselves become more capable over time.

Aiding the security approach, said the researchers, is the use of a novel curiosity-driven honeypot, which can, like a carnivorous pitcher plant, lure attackers and then trap them where they will do no harm. Once attackers are trapped, the learning algorithm can incorporate new information about the threat and adapt to prevent future attacks from making it through.

The research team behind this LIDAR approach is looking to patent the technology for commercialization. In the process, they may also want to settle on a less confusing moniker. Otherwise, we may stumble into a future where users securing a network of LiDAR sensors with LIDAR have to enact an entire "Who's on First?" routine every time they update their cybersecurity.

Visit link:
New cybersecurity system protects networks with LIDAR, no not that LiDAR - C4ISRNet

New York Institute of Finance and Google Cloud launch a Machine Learning for Trading Specialisation on Coursera – HedgeWeek

The New York Institute of Finance (NYIF) and Google Cloud have launched a new Machine Learning for Trading Specialisation available exclusively on the Coursera platform.

The Specialisation helps learners leverage the latest AI and machine learning techniques for financial trading.

Amid the Fourth Industrial Revolution, nearly 80 per cent of financial institutions cite machine learning as a core component of business strategy and 75 per cent of financial services firms report investing significantly in machine learning. The Machine Learning for Trading Specialisation equips professionals with key technical skills increasingly needed in the financial industry today.

Composed of three courses in financial trading, machine learning, and artificial intelligence, the Specialisation features a blend of theoretical and applied learning. Topics include analysing market data sets, building financial models for quantitative and algorithmic trading, and applying machine learning in quantitative finance.

As we enter an era of unprecedented technological change within our sector, we're proud to offer up-skilling opportunities for hedge fund traders and managers, risk analysts, and other financial professionals to remain competitive through Coursera, says Michael Lee, Managing Director of Corporate Development at NYIF. The past ten years have demonstrated the staying power of AI tools in the finance world, further proving the importance for both new and seasoned professionals to hone relevant tech skills.

The Specialisation is particularly suited for hedge fund traders, analysts, day traders, those involved in investment management or portfolio management, and anyone interested in constructing effective trading strategies using machine learning. Prerequisites include basic competency with Python, familiarity with pertinent libraries for machine learning, a background in statistics, and foundational knowledge of financial markets.

Cutting-edge technologies, such as machine and reinforcement learning, have become increasingly commonplace in finance, says Rochana Golani, Director, Google Cloud Learning Services. We're excited for learners on Coursera to explore the potential of machine learning within trading. Looking beyond traditional finance roles, we're also excited for the Specialisation to support machine learning professionals seeking to apply their craft to quantitative trading strategies.

View post:
New York Institute of Finance and Google Cloud launch a Machine Learning for Trading Specialisation on Coursera - HedgeWeek