Archive for the ‘Alphago’ Category

DeepMind aims to marry deep learning and classic algorithms – VentureBeat


Will deep learning really live up to its promise? We don't actually know. But if it's going to, it will have to assimilate how classical computer science algorithms work. This is what DeepMind is working on, and its success is important to the eventual uptake of neural networks in wider commercial applications.

Founded in 2010 with the goal of creating AGI (artificial general intelligence, a general-purpose AI that truly mimics human intelligence), DeepMind is at the forefront of AI research. The company is also backed by industry heavyweights like Elon Musk and Peter Thiel.

Acquired by Google in 2014, DeepMind has made headlines for projects such as AlphaGo, a program that beat the world champion at the game of Go in a five-game match, and AlphaFold, which found a solution to a 50-year-old grand challenge in biology.

Now DeepMind has set its sights on another grand challenge: bridging the worlds of deep learning and classical computer science to enable deep learning to do everything. If successful, this approach could revolutionize AI and software as we know them.

Petar Veličković is a senior research scientist at DeepMind. His entry into computer science came through algorithmic reasoning and algorithmic thinking using classical algorithms. Since he started doing deep learning research, he has wanted to reconcile deep learning with the classical algorithms that initially got him excited about computer science.

Meanwhile, Charles Blundell is a research lead at DeepMind who is interested in getting neural networks to make much better use of the huge quantities of data they're exposed to. Examples include getting a network to tell us what it doesn't know, to learn much more quickly, or to exceed expectations.

When Veličković met Blundell at DeepMind, something new was born: a line of research that goes by the name of Neural Algorithmic Reasoning (NAR), after a position paper the duo recently published.

NAR traces the roots of the fields it touches upon and branches out to collaborations with other researchers. And unlike much pie-in-the-sky research, NAR has some early results and applications to show for itself.

Veličković was in many ways the person who kickstarted the algorithmic reasoning direction in DeepMind. With his background in both classical algorithms and deep learning, he realized that there is a strong complementarity between the two of them. What one of these methods tends to do really well, the other one doesn't do that well, and vice versa.

Usually when you see these kinds of patterns, it's a good indicator that if you can do anything to bring them a little bit closer together, then you could end up with an awesome way to fuse the best of both worlds, and make some really strong advances, Veličković said.

When Veličković joined DeepMind, Blundell said, their early conversations were a lot of fun because they have very similar backgrounds. They both share a background in theoretical computer science. Today, they both work a lot with machine learning, in which a fundamental question for a long time has been how to generalize: how do you work beyond the data examples you've seen?

Algorithms are a really good example of something we all use every day, Blundell noted. In fact, he added, there aren't many algorithms out there. If you look at standard computer science textbooks, there are maybe 50 or 60 algorithms that you learn as an undergraduate. And everything people use to connect over the internet, for example, is using just a subset of those.

There's this very nice basis for very rich computation that we already know about, but it's completely different from the things we're learning. So when Petar and I started talking about this, we saw clearly there's a nice fusion that we can make here between these two fields that has actually been unexplored so far, Blundell said.

The key thesis of NAR research is that algorithms possess fundamentally different qualities to deep learning methods. And this suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

To approach the topic for this article, we asked Blundell and Veličković to lay out the defining properties of classical computer science algorithms compared to deep learning models. Figuring out the ways in which algorithms and deep learning models are different is a good start if the goal is to reconcile them.

For starters, Blundell said, algorithms in most cases don't change. Algorithms are composed of a fixed set of rules that are executed on some input, and usually good algorithms have well-known properties. For any kind of input the algorithm gets, it gives a sensible output, in a reasonable amount of time. You can usually change the size of the input and the algorithm keeps working.

The other thing you can do with algorithms is you can plug them together. The reason algorithms can be strung together is because of this guarantee they have: Given some kind of input, they only produce a certain kind of output. And that means that we can connect algorithms, feeding one algorithm's output into another algorithm's input and building a whole stack.
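
To make the composition point concrete, here is a small, hedged illustration (not from the article): Python's built-in sort and binary search chain together precisely because the sort's output guarantee, an ordered list, satisfies the search's input precondition.

```python
# Illustrative sketch only: algorithms compose because each one's
# output guarantee matches the next one's input precondition.
from bisect import bisect_left

def binary_search(ordered, target):
    """Requires an ordered list; returns the index of target, or -1 if absent."""
    i = bisect_left(ordered, target)
    return i if i < len(ordered) and ordered[i] == target else -1

data = [42, 7, 19, 3, 25]
ordered = sorted(data)             # postcondition: the list is ordered
print(binary_search(ordered, 19))  # precondition satisfied, so the answer is correct
print(binary_search(ordered, 8))   # -1: not present
```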

People have been looking at running algorithms in deep learning for a while, and it's always been quite difficult, Blundell said. As trying out simple tasks is a good way to debug things, Blundell referred to a trivial example: the copy task, where an algorithm's output is simply a copy of its input.

It turns out that this is harder than expected for deep learning. You can learn to do this up to a certain length, but if you increase the length of the input past that point, things start breaking down. If you train a network on the numbers 1-10 and test it on the numbers 1-1,000, many networks will not generalize.

Blundell explained, They won't have learned the core idea, which is you just need to copy the input to the output. And as you make the process more complicated, as you can imagine, it gets worse. So if you think about sorting, or various graph algorithms, the generalization is actually far worse if you just train a network to simulate an algorithm in a very naive fashion.
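
As a hedged illustration of that failure mode (this is a toy sketch using PyTorch, which the article does not mention, and not DeepMind's experiment), a small network trained to copy numbers in the range 1-10 extrapolates poorly when asked to copy much larger numbers:

```python
# Minimal sketch: train a small MLP to copy its scalar input on 1-10,
# then test it far outside that range. With saturating activations the
# network fails to extrapolate, illustrating the generalization gap.
import torch
import torch.nn as nn

torch.manual_seed(0)

x_train = torch.arange(1.0, 11.0).unsqueeze(1)   # the numbers 1-10
y_train = x_train.clone()                        # target is the input itself

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

# In-distribution the copies are near-perfect; out of distribution the tanh
# layers saturate, so predictions plateau instead of tracking the input.
with torch.no_grad():
    for x in [5.0, 10.0, 100.0, 1000.0]:
        pred = model(torch.tensor([[x]])).item()
        print(f"input {x:7.1f} -> predicted copy {pred:9.2f}")
```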

Fortunately, it's not all bad news.

[T]here's something very nice about algorithms, which is that they're basically simulations. You can generate a lot of data, and that makes them very amenable to being learned by deep neural networks, he said. But it requires us to think from the deep learning side. What changes do we need to make there so that these algorithms can be well represented and actually learned in a robust fashion?
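
Here is a minimal sketch of that "algorithms as simulations" idea (an illustration, not the paper's pipeline): because we can run the classical algorithm ourselves, it can label as much training data as we like for a network to imitate.

```python
# The algorithm acts as the simulator: it generates unlimited (input, target)
# pairs that a network could later be trained to reproduce.
import random

def make_example(length):
    xs = [random.randint(0, 99) for _ in range(length)]
    return xs, sorted(xs)          # the classical algorithm supplies the label

random.seed(0)
dataset = [make_example(random.randint(3, 10)) for _ in range(5)]
for xs, ys in dataset:
    print(xs, "->", ys)
```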

Of course, answering that question is far from simple.

When using deep learning, usually there isn't a very strong guarantee on what the output is going to be. So you might say that the output is a number between zero and one, and you can guarantee that, but you couldn't guarantee something more structural, Blundell explained. For example, you can't guarantee that if you show a neural network a picture of a cat and then you take a different picture of a cat, it will definitely be classified as a cat.

With algorithms, you could develop guarantees that this wouldn't happen. This is partly because the kinds of problems algorithms are applied to are more amenable to these kinds of guarantees. So if a problem is amenable to these guarantees, then maybe we can bring classical algorithmic tasks across into deep neural networks and allow these kinds of guarantees for the networks.

Those guarantees are usually generalizations: over the size of the inputs, over the kinds of inputs you have, and over outcomes that generalize across types. For example, if you have a sorting algorithm, you can sort a list of numbers, but you could also sort anything you can define an ordering for, such as letters and words. However, that's not the kind of thing we see at the moment with deep neural networks.
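
For example (a trivial sketch, not from the article), the same sorting routine handles numbers, words, or anything else for which an ordering is defined:

```python
# One algorithm, many input types: all that is needed is an ordering.
print(sorted([3, 1, 2]))                                  # numbers
print(sorted(["pear", "apple", "fig"]))                   # words, lexicographic order
print(sorted([(2, "b"), (1, "a")], key=lambda t: t[0]))   # a custom key supplies the order
```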

Another difference, which Veličković noted, is that algorithmic computation can usually be expressed as pseudocode that explains how you go from your inputs to your outputs. This makes algorithms trivially interpretable. And because they operate over these abstractified inputs that conform to some preconditions and post-conditions, it's much easier to reason theoretically about them.

That also makes it much easier to find connections between different problems that you might not see otherwise, Veličković added. He cited the example of MaxFlow and MinCut as two problems that are seemingly quite different, but where the solution of one is necessarily the solution to the other. That's not obvious unless you study it from a very abstract lens.
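
A small sketch of that connection, assuming the networkx library (which the article does not mention): on any flow network, the maximum s-t flow value coincides with the capacity of the minimum s-t cut.

```python
# Max-flow / min-cut duality on a tiny network: the two answers coincide.
import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)
G.add_edge("a", "b", capacity=1)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, _ = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value)   # both 5: solving one problem solves the other
```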

There are a lot of benefits to this kind of elegance and these constraints, but it's also the potential shortcoming of algorithms, Veličković said. That's because if you want to make your inputs conform to these stringent preconditions, what this means is that if data that comes from the real world is even a tiny bit perturbed and doesn't conform to the preconditions, I'm going to lose a lot of information before I can massage it into the algorithm.

He said that obviously makes the classical algorithm method suboptimal, because even if the algorithm gives you a perfect solution, it might give you a perfect solution in an environment that doesn't make sense. Therefore, the solutions are not going to be something you can use. On the other hand, he explained, deep learning is designed to rapidly ingest lots of raw data at scale and pick up interesting rules in the raw data, without any real strong constraints.

This makes it remarkably powerful in noisy scenarios: You can perturb your inputs and your neural network will still be reasonably applicable. For classical algorithms, that may not be the case. And that's also another reason why we might want to find this awesome middle ground where we might be able to guarantee something about our data, but not require that data to be constrained to, say, tiny scalars when the complexity of the real world might be much larger, Veličković said.

Another point to consider is where algorithms come from. Usually what happens is you find very clever theoretical scientists, you explain your problem, and they think really hard about it, Blundell said. Then the experts go away and map the problem onto a more abstract version that drives an algorithm. The experts then present their algorithm for this class of problems, which they promise will execute in a specified amount of time and provide the right answer. However, because the mapping from the real-world problem to the abstract space on which the algorithm is derived isn't always exact, Blundell said, it requires a bit of an inductive leap.

With machine learning, it's the opposite, as ML just looks at the data. It doesn't really map onto some abstract space, but it does solve the problem based on what you tell it.

What Blundell and Veličković are trying to do is get somewhere in between those two extremes, where you have something that's a bit more structured but still fits the data, and doesn't necessarily require a human in the loop. That way you don't need to think so hard as a computer scientist. This approach is valuable because real-world problems are often not exactly mapped onto the problems that we have algorithms for, and even for the things we do have algorithms for, we have to abstract the problems. Another challenge is how to come up with new algorithms that significantly outperform existing algorithms that have the same sort of guarantees.

When humans sit down to write a program, it's very easy to get something that's really slow, for example something that has exponential execution time, Blundell noted. Neural networks are the opposite. As he put it, they're extremely lazy, which is a very desirable property for coming up with new algorithms.

There are people who have looked at networks that can adapt their demands and computation time. In deep learning, how one designs the network architecture has a huge impact on how well it works. There's a strong connection between how much processing you do, how much computation time is spent, and what kind of architecture you come up with; they're intimately linked, Blundell said.

Veličković noted that one thing people sometimes do when solving natural problems with algorithms is try to push them into a framework they've come up with that is nice and abstract. As a result, they may make the problem more complex than it needs to be.

The traveling [salesperson], for example, is an NP-complete problem, and we don't know of any polynomial-time algorithm for it. However, there exists a prediction that's 100% correct for the traveling [salesperson], for all the towns in Sweden, all the towns in Germany, all the towns in the USA. And that's because geographically occurring data actually has nicer properties than any possible graph you could feed into traveling [salesperson], Veličković said.

Before delving into NAR specifics, we felt a naive question was in order: Why deep learning? Why go for a generalization framework specifically applied to deep learning algorithms and not just any machine learning algorithm?

The DeepMind duo wants to design solutions that operate over the true raw complexity of the real world. So far, the best solution for processing large amounts of naturally occurring data at scale is deep neural networks, Veličković emphasized.

Blundell noted that neural networks have much richer representations of the data than classical algorithms do. Even inside a large model class that's very rich and complicated, we find that we need to push the boundaries even further than that to be able to execute algorithms reliably. It's a sort of empirical science that we're looking at. And I just don't think that as you get richer and richer decision trees, they can start to do some of this process, he said.

Blundell then elaborated on the limits of decision trees.

We know that decision trees are basically a trick: If this, then that. What's missing from that is recursion, or iteration, the ability to loop over things multiple times. In neural networks, for a long time people have understood that there's a relationship between iteration, recursion, and the current neural networks. In graph neural networks, the same sort of processing arises again; the message passing you see there is again something very natural, he said.
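
Here is a minimal NumPy sketch of one message-passing round (a toy illustration under stated assumptions, not DeepMind's architecture): each node averages its neighbours' feature vectors and mixes them with its own state through weight matrices, and a graph neural network simply iterates this step.

```python
# One round of message passing on a 3-node graph.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],        # adjacency: node 0 is connected to nodes 1 and 2
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))     # 3 nodes, 4 features each
W_self = rng.normal(size=(4, 4))
W_msg = rng.normal(size=(4, 4))

deg = A.sum(axis=1, keepdims=True)
messages = (A @ H) / deg                           # mean of neighbour features
H_next = np.tanh(H @ W_self + messages @ W_msg)    # update each node's state
print(H_next.shape)                                # (3, 4): same nodes, new features
```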

Ultimately, Blundell is excited about the potential to go further.

If you think about object-oriented programming, where you send messages between classes of objects, you can see it's exactly analogous, and you can build very complicated interaction diagrams and those can then be mapped into graph neural networks. So it's from the internal structure that you get a richness that seems like it might be powerful enough to learn algorithms you wouldn't necessarily get with more traditional machine learning methods, Blundell explained.

Go here to read the rest:
DeepMind aims to marry deep learning and classic algorithms - VentureBeat

Samsung has its own AI-designed chip. Soon, others too – Texasnewstoday.com

Samsung is using artificial intelligence to automate the most complex and subtle processes of designing state-of-the-art computer chips.

The Korean giant was one of the first chipmakers to create chips using AI. Samsung is using AI features in new software from Synopsys, a leading chip-design software company whose tools are used by many chipmakers. This is the first of a real commercial processor design using AI, said Aart de Geus, chairman and co-CEO of Synopsys.

Other companies, including Google and Nvidia, have talked about designing chips using AI. However, Synopsys' tool, called DSO.ai, has the potential to be the most far-reaching, as Synopsys works with dozens of companies. According to industry watchers, the tool could accelerate semiconductor development and unleash new chip designs.

Synopsys has another valuable asset for creating AI-designed chips: a long history of state-of-the-art semiconductor designs that can be used to train AI algorithms.

A Samsung spokeswoman confirmed that the company is using Synopsys' AI software to design Exynos chips for use in smartphones, including its own branded phones, and other gadgets. Earlier this week, Samsung announced its latest smartphone, a foldable device called the Galaxy Z Fold 3. The company hasn't confirmed whether the AI-designed chips are in production yet, or in which products they might appear.

Throughout the industry, AI seems to be changing the way chips are manufactured.

A Google research paper published in June describes using AI to place components on the Tensor chips that the company uses to train and run AI programs in its data centers. Google's next smartphone, the Pixel 6, will feature a custom Samsung-made chip. A Google spokeswoman didn't say whether AI helped design the smartphone chip.


Chip makers such as Nvidia and IBM are also working on AI-led chip design. Other manufacturers of chip design software, including Synopsys competitor Cadence, are also developing AI tools to help map new chip blueprints.

Mike Demler, senior analyst at Linley Group, who tracks chip design software, says artificial intelligence is well suited for placing billions of transistors across a chip. It helps with these problems that have become very complicated, he says. This will be a standard part of the calculation toolkit.

Using AI tends to be expensive, says Demler, because training powerful algorithms requires a lot of cloud computing power. But he hopes that as computing costs go down and models become more efficient, they will become more accessible. He adds that many tasks related to chip design cannot be automated, so professional designers are still needed.

Modern microprocessors are extremely complex and have multiple components that need to be effectively combined. Sketching a new chip design usually requires weeks of hard work and decades of experience. The best chip designers instinctively understand how different decisions affect each step in the design process. You cant easily write that understanding into your computer code, but you can use machine learning to acquire some of the same skills.

The AI approach used by Synopsys, Google, Nvidia, and IBM uses a machine learning technique called reinforcement learning to design the chip. Reinforcement learning involves training algorithms that perform tasks through rewards or punishments, and has proven to be an effective way to capture the subtle and difficult-to-systematize judgments of humans.

This method allows you to automatically create design basics, such as component placement and component routing, by experimenting with different designs in simulation and learning which ones give the best results. This speeds up the chip design process and allows engineers to experiment with new designs more efficiently. Synopsys said in a June blog post that one of the North American integrated circuit manufacturers used the software to improve chip performance by 15%.
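
As a heavily hedged toy (this is not Synopsys' DSO.ai, Google's floorplanning system, or any production tool), the sketch below shows the reinforcement learning pattern the article describes: an agent places blocks one by one, a simulated score of the resulting layout acts as the reward, and repeated trials steer it toward better placements.

```python
# Toy placement problem: three blocks connected in a chain are placed on a
# row of slots; the reward is the negative total wire length, and tabular
# Monte Carlo control with epsilon-greedy exploration gradually prefers
# placements that keep connected blocks close together.
import random
from collections import defaultdict

SLOTS, BLOCKS = 6, 3
Q = defaultdict(float)          # Q[(state, action)] -> value estimate
alpha, eps = 0.1, 0.2

def wirelength(placement):
    # Blocks form a chain 0-1-2; shorter wires mean a better placement.
    return sum(abs(placement[i + 1] - placement[i]) for i in range(BLOCKS - 1))

def free_slots(state):
    return [s for s in range(SLOTS) if s not in state]

random.seed(0)
for _ in range(5000):
    state, trajectory = (), []
    for _ in range(BLOCKS):                                  # place one block per step
        actions = free_slots(state)
        if random.random() < eps:
            a = random.choice(actions)                       # explore
        else:
            a = max(actions, key=lambda s: Q[(state, s)])    # exploit
        trajectory.append((state, a))
        state = state + (a,)
    reward = -wirelength(state)                              # "punishment" grows with wire length
    for s, a in trajectory:                                  # update toward the final reward
        Q[(s, a)] += alpha * (reward - Q[(s, a)])

# Greedy rollout after training: connected blocks tend to end up close together.
state = ()
for _ in range(BLOCKS):
    a = max(free_slots(state), key=lambda s: Q[(state, s)])
    state = state + (a,)
print("learned placement:", state, "wirelength:", wirelength(state))
```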

Most notably, reinforcement learning was used by Google's subsidiary DeepMind to develop AlphaGo, the program that mastered the board game Go and defeated a world-class Go player in 2016.

Here is the original post:
Samsung has its own AI-designed chip.Soon, others too - Texasnewstoday.com

Australia’s intelligent approach to artificial intelligence inventions – Lexology

An Australian court has provided a clear signal that inventions derived from machine learning activities can be subject to valid patent applications, provided they satisfy the regular indicia of inventiveness and novelty, whilst lacking a human inventor.

In Thaler v. Commissioner of Patents [2021] FCA 879, Justice Beach adopted an expansive view of the Patents Act, to hold that the concept of inventor can include, within its ambit, the notion of a suitably programmed computational device.

It is evident from the design of advanced AI systems such as AlphaFold and AlphaGo, that the frontier of machine learning systems is in a continued state of rapid evolution. Whilst his honour spent significant portions of the judgement attempting to define the evolving concept of Artificial Intelligence, he was clear in holding that the innovative product of such systems can be subject to protection, whilst simultaneously lacking a human inventor.

In a distinct recognition of the importance of such advances to a society, his honour noted at [56]:

Now I have just dealt with one field of scientific inquiry of interest to patent lawyers. But the examples can be multiplied. But what this all indicates is that no narrow view should be taken as to the concept of inventor. And to do so would inhibit innovation not just in the field of computer science but all other scientific fields which may benefit from the output of an artificial intelligence system.

The contrast between this liberal interpretation of our Patents Act's application to machine learning inventions and the court's lack of clarity in the general field of software-type inventions is quite stark. However, the decision provides clear directions to the Australian Patent Office that AI advances should be readily patentable.

Read more:
Australia's intelligent approach to artificial intelligence inventions - Lexology

How Will AI Transform The Financial Sector And Its Jobs? – Youth Ki Awaaz

See the article here:
How Will AI Transform The Financial Sector And Its Jobs? - Youth Ki Awaaz

Grafton Guineas beckon after red-hot start to July Carnival – Fox Sports

While Grafton won't be represented in the Ramornie Handicap, local hopes will be pinned on father and daughter duo, trainer Greg and jockey Leah Kilner, in today's next biggest race, the $80,000 Tursa Grafton Guineas.

Stable star Swanston has won four races and placed four times from 14 starts heading into the 1600m three-year-old feature. The gelding gained direct entry by winning the Grafton Guineas Prelude (1240m) on Westlawn Day (June 27), and will carry 55.5kg with Leah once again in the saddle.

The son of Smart Missile paid healthy odds of $21 when he kicked ahead in the straight and held off higher-fancied rivals Oakfield Arrow ($2.80 fav) and Alpha Go ($8) to the line.

I thought Swanston was well over the odds, Greg said of the prelude win. Ridiculous odds on form, ran a place to a good horse (Adelaide's Diamond) on the Gold Coast (May 12) on a very heavy track (heavy 10).

After the recent square off, the Ladbrokes bookmakers have put Swanston ($10) on par with Oakfield Arrow ($9.50), behind Les Kelly-trained favourite Tamilaide ($3.40), on the eve of the Grafton Guineas. However, the wider draw out of barrier ten has connections concerned.

I give him a really good each-way chance, for sure, Leah said of Swanston from the family's stables at Cuban Song Lodge yesterday.

He loves Grafton, he has a really good record here, and he can run a mile.

I hope the track stays a little bit wet for him, because he likes the sting out of the ground.

He's just drawn a bit awkward, so I don't know what we're going to get. He's a horse that does like the fence. But he tries his heart out, so he'll be thereabouts.

Swanston's win completed a double for the Kilners on the opening day of the carnival, after Volfoni ($14) stormed home on the outside from the rear of the field in the Westlawn Insurance Brokers Class One Handicap (1215m).

Volfoni has also drawn wide (15) in today's last race at Clarence River Jockey Club, the Grafton Taxis Benchmark 58 Handicap (1400m).

He's drawn extremely wide and he'll have to go back, Leah said.

He'll probably be last again like he was the other day. Just let him work into it and let him find the line again.

The track has been playing a little bit leader-ish. Hopefully on the main days I'm sure the rail will be back in the true and everything will get the chance.

He will be second up, hopefully he doesn't suffer from second-up syndrome, but I think he'll be right.

He's a horse that really loves racing. He's probably not blessed with the most ability, but he just puts in 110 per cent every time he goes round. He's a bit of a stable favourite, he just does whatever you want him to do and just tries every time you put him around.

The wins to Swanston and Volfoni almost doubled the stable's 2020-21 season tally, with just three (including two to Swanston) prior to Westlawn Day.

Meanwhile, Leah is tied with Matthew McGuren as the virtual leader in the race for Jockey of the Carnival, with a pair of second placings on Scilago in the Grafton Cup Prelude and Hit The Target in the South Grafton Cup to go with her two wins.

It's always really good to get a winner in carnival time, because everyone's watching and you're a little bit more pumped up, she said.

But at the end of the day it is work, and you're trying to win every race you can.

She will ride Scilago again in the CPSU NSW Wage Growth Rural Plate Class 6 (2200m) today for Gold Coast trainer Leon Elliott.

He ran really well when second in the Cup Prelude, Leah said.

They decided not to go to the cup and put him in the 2200 on Ramornie Day. He's only a little horse but he tries really hard and it's not a very strong race, so I give him a really good chance.

I've got three really nice rides tomorrow.

The Grafton track was rated a soft six on the eve of Ramornie Day.

The first race of the day is the Winning Edge Presentations 4YO&Up Class 2 Handicap (1600m) at 12.29pm, the Guineas runs at 3.24pm, the Ramornie at 3.59pm and the last at 4.34pm.

Read more:
Grafton Guineas beckon after red-hot start to July Carnival - Fox Sports