Archive for the ‘Free Software’ Category

How Quake Shook the World: Quake Turns 25 – How-To Geek


After revolutionizing PC gaming with Wolfenstein 3D and Doom, id Software pulled off a hat trick with Quake, released on June 22, 1996. Quake mixed polygonal 3D graphics, networking, and grunge into a groundbreaking hit with wide influence. Here's what made it special.

In Quake, you play as an unnamed protagonist (later called Ranger) who must travel through dimensional gates to defeat an alien named Quake that has invaded Earth. Like Doom before it, Quake is a first-person shooter game where you explore levels, solve minor puzzles, and obliterate every monster that you see, ideally in a shower of bloody gibs.

Compared to the technicolor violence of Doom's universe, Quake felt relatively bland and dark, graphically speaking, and its single-player campaign was no showstopper. But it incorporated dark medieval imagery and H.P. Lovecraft influences that felt appropriate for 1996's era of grunge music, grunge fonts, and grunge fashion. And its pioneering 3D graphics and networking support placed it head and shoulders above the competition.

id Software forged Quake in a contentious team effort: a long period of technical development on the engine and disagreements over game design eventually led id Software co-founder John Romero to leave the company.

Still, the Quake team pulled off a major win for id Software. John Carmack, Michael Abrash, John Cash, Romero, and Dave Taylor handled the programming. John Romero, Sandy Petersen, American McGee, and Tim Willits designed the levels, and Adrian Carmack and Kevin Cloud created the graphics. Romero also handled game design and production, created editor tools, and worked on sound effects.

Notably, Quake features a creepy and compelling ambient soundtrack composed by Trent Reznor of the industrial rock band Nine Inch Nails. Reznor also voiced the sound effects of the main character. In tribute to Reznor's services, which he reportedly provided for free, id's artists put a NIN (Nine Inch Nails) logo on the nail gun ammo box in the game.

Since Wolfenstein 3D, state-of-the-art first-person shooter games on PC generally used 2.5D graphics techniques to simulate height and depth while usually restricting player movement to a two-dimensional plane. Quake broke the mold by introducing a fully 3D polygonal universe populated with 3D objects and monsters, giving players six degrees of freedom in an immersive virtual world. Unlike 1993s Doom, you could look around freely (and even jump) in Quake.

"It was an obvious progression from Wolfenstein's three degrees of freedom, to Doom's four degrees of freedom, to Quake's six degrees of freedom," Quake programmer John Carmack told How-To Geek. "Some Doom clones experimented with a shearing look up/down for five degrees of freedom, but if you are going to support arbitrarily oriented polygons, you might as well get all six."

Polygonal 3D video game worlds existed at least a decade before Quake, and even a polygonal 3D first-person shooter called Descent predated it in 1995. But in mid-1996, fully 3D video games were still rare, and action 3D games with in-game physics that could run with a decent frame rate on a consumer PC (without any 3D graphical acceleration) were unheard of.

Nintendo's Super Mario 64, released in Japan just one day after Quake, provided a fluid 3D world on the Nintendo 64 by utilizing special 3D graphics hardware. Both games revolutionized 3D polygonal gameplay in their respective genres, but Quake did it on your family's 75 MHz Pentium PC. Like Commander Keen, Wolfenstein, and Doom before it, Quake made a regular PC do more than what anyone thought was possible at the time.

In addition to its immersive 3D world, Quake played with dynamic light and shadow in a way never before seen in a computer game. "The lighting model was a more unique feature than the polygons," says Carmack. "There were other 3D polygon games and applications, but the light mapping and surface caching gave Quake a very different atmosphere."

By default, Quake usually ran at 320×200 resolution on an average PC in 1996. Higher resolutions were possible, but they required much faster CPUs. With the 1997 release of GLQuake, an official version of Quake that supported the OpenGL 3D graphics API, gamers could buy and use new 3D graphics accelerator cards to run Quake at higher frame rates and resolutions, effectively launching the GPU card era in gaming that we still have today.


Like Doom before it, Quake pushed the state of the art when it came to multiplayer networked gaming. Doom introduced the FPS deathmatch to the world, but only through modem-to-modem connections, serial links, and local area networks. Quake was one of the first mainstream games to incorporate TCP/IP networking directly into the game itself, allowing people to type in an IP address and connect directly with a friend over the internet for co-op or competitive deathmatch play. Just months after Quake's launch, id Software introduced a QuakeWorld client that made internet multiplayer an even better experience.

Quake was also highly moddable, meaning that people who bought the game were allowed (and even encouraged) to build extensions off of it, make their own maps, and even extend the game engine in ways that the designers never thought possible. In this vein, id Software published its own programming language called QuakeC (used to develop Quake itself) that unlocked the engine for modders in a powerful way. After that, novel and influential variations of Quake's multiplayer modes emerged, such as capture the flag and Team Fortress.

As a result of its highly moddable nature, Quake also inspired some of the earliest forms of video game machinima, where people would use the Quake engine as a staging area to tell a story that would be recorded as a video and then (usually) shared on the internet.

RELATED: How to Play Classic "Doom" in Widescreen on Your PC or Mac

As a pioneering fully-3D FPS with in-game physics, Quake inspired emergent gameplay techniques, such as strafe jumping, bunny hopping, and rocket jumping, that required a high degree of skill to master. The competitive edge that these techniques gave over other players inspired Quake's use in video game tournaments, which many now consider a key step in the dawn of competitive eSports.

Quake also had a big impact on how we control PC games. Initially, many people played Quake with a keyboard, like Doom. But the extra dimension of looking up and down gave players who adapted to mouse control a distinct advantage. The now-ubiquitous WASD-plus-mouselook controls became popular largely due to the play style of Dennis "Thresh" Fong, who won Quake tournaments in the 1990s. They weren't the default control mapping until Quake III in 1999, but players could easily remap Quake's controls to suit their tastes. Once they tried WASD plus the mouse, few competitive players went back to keyboard-only gameplay.

To our knowledge, Quake was the first action video game with a built-in console interface used for changing game options and manipulating the engine itself. At any time during play on the PC version of the game, players could press the tilde (~) key, and a console box would drop down from the top of the screen with a prompt. In that box, you could type text commands that could move the player, manipulate the game world, change options, or enable cheats.

For example, by bringing up the console and typing "sv_gravity 100", players could reduce the effects of gravity in the game engine and jump much higher. This was powerful stuff in 1996. It's just one more way in which Quake was more than just a game: it was a 3D-gaming platform unto itself.
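For flavor, here are a few console commands of the sort players typed, shown as they might appear in a config file. Only sv_gravity comes from the article above; the other commands and the default gravity value are recalled from the shipping game, so treat them as illustrative rather than gospel:

```
sv_gravity 100    // lower gravity (the default is 800) for moon-style jumps
noclip            // toggle flying through level geometry
god               // toggle invulnerability
map e1m1          // load a level directly from the console
connect 192.0.2.7 // join a multiplayer server by IP address, as described above
```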

Despite the struggles during its development, Quake was a hit out of the gate. Romero himself uploaded the shareware version of Quake to the internet for release on June 22, 1996. Word rapidly spread, and Quake sold several hundred thousand copies within a year of its release, reaching 550,000 copies by 1999, with reportedly over 1.8 million sold by 2010.

At least 10 games licensed the Quake engine, and, of course, the game spawned sequels in Quake II, Quake III, Quake 4, Enemy Territory: Quake Wars, and more. In cultural influence, Quake feels at least on par with games like Tetris and Super Mario Bros.: it cemented a genre and inspired future developers to take the concept much further than originally imagined.

As for id Software, Quake served as the swan song for the duo of Carmack and Romero, whose partnership spawned a wave of early 1990s PC hits that are still legendary. By the time id released Quake, Romero already knew that he'd be leaving the company; he wanted to do more than just first-person shooters. "The disagreement over whether we spend time exploring game design alternatives to FPS vs. just making another shooter is the reason why I decided to leave," Romero told us.

Still, Romero is proud of what Quake became. "The single-player game was pretty scary with the lights low and volume up," he says. "It's stood the test of time." He hosts Quake DeathJam events every year in Ireland, he notes, and the game is as popular as ever with competitive Quake players.

Likewise, John Carmack is also proud of Quake, but he sees how its creation might have progressed differently. "I am certainly satisfied with having made such an iconic game," he told How-To Geek. "With hindsight, I sometimes think we could have done better by doing all the modding and networking technologies with something closer to Doom's rendering engine so it would run faster for more people and be easier to map for, then do the full 3D maps and characters in another game a year later, but who knows, that might have been an opening for another company to pass us by."

But they did what they did, and no one passed them by that summer of 1996. Instead, id Software set the pace for the industry with Quake, and we're still talking about it 25 years later.

There's much debate online about the best way to play Quake today. Usually, the answer depends on whether you want a more authentic experience or one that utilizes advances in graphics technology since 1996.

If you don't want any fuss and have a Windows machine, browse over to Steam or GOG and buy Quake for about $5. You'll get the vanilla game with 1996-era options: no widescreen or gamepad support, for example.

But there are other ways to go. After id Software released the source code to the Quake engine in 1999, dedicated fans created newer versions of the game engine, called "source ports," that allow high-resolution textures, widescreen resolutions, modern controller support, and much more.

Among the many Quake ports available, QuakeSpasm (which runs on Windows, Macs, and Linux) remains popular as a vanilla solution with widescreen and Xbox controller support, while DarkPlaces supports more modern lighting effects and textures.

Either way, you'll still need to buy a copy of Quake online from Steam or GOG first (unless you have your vintage Quake CD), so that you can copy the data files over to the source port directory.

Have fun, and happy birthday, Quake!


Read the original:
How Quake Shook the World: Quake Turns 25 - How-To Geek

Google Sheets is the best free spreadsheet for anyone trying to budget: here's how to get started – CNBC

Select's editorial team works independently to review financial products and write articles we think our readers will find useful. We may receive a commission when you click on links for products from our affiliate partners.

If you've never built a budget before, the thought of tracking all of your expenses and sources of income can seem like an immense task. While the act of tracking your everyday expenses might seem tedious and hard to keep up with, there are a number of apps that make it easier for people to understand where they should save or spend.

With so many different apps on the market, it can be hard to know which one is the best fit for you. Select compared over a dozen options when rating the best budgeting apps and best expense tracker apps, and we found that the most popular ones have ratings of nearly 5 out of 5 stars and thousands of customer reviews.

We ranked Google Sheets as the best free spreadsheet for anyone trying to budget. Google Sheets is completely free, offers a wide variety of different templates and can be connected to third-party software to automatically import your bank transactions.

Below, we review Google Sheets to give you all the details on its features, including the tools, perks, safety, pricing, availability and ratings so you can decide if it is the right choice for managing your money.

Information about Google Sheets has been collected independently by Select and has not been reviewed or provided by Google Sheets prior to publication.

Free templates: Gmail account users can access a variety of free budgeting templates to help get started

Expense tracking: Users manually input their expenses, but some budgeting templates offer preset categories

Links to bank accounts: No, but some templates offer third-party add-on software that automatically pulls financial transactions into Google Sheets

Availability: Offered on web browser and smartphone devices

Security: Google server protection. Unless you've shared your Google Sheet with someone, nobody can access your files without your Gmail account username and password. For this, Google offers two-factor authentication and physical security keys like YubiKey

Google Sheets requires users to manually input all of their expenses and income. While this may be time-consuming and tedious, doing so can help you understand exactly how you're spending your money each month.

While Google Sheets can seem like a hassle because you have to manually enter information, users can benefit from a wide variety of free templates that can make tracking your spending habits much easier. Some templates even have preset categories.

There are numerous third-party budget templates you can find online with a quick search, but Google provides two options in its template section: an annual budget and a monthly budget. Both templates are comprehensive, allowing you to itemize all of your expenses and income flows into various preset categories and subcategories that you can change based on your needs. The annual budget template also provides a month-by-month summary of your income, expenses and net savings.
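If you'd rather build a sheet from scratch than use a template, the core of a category budget is a couple of functions. As a minimal hypothetical sketch, assuming a tab named "Transactions" with categories in column B and amounts in column C (the tab name and column layout are invented for illustration):

```
=SUMIF(Transactions!B:B, "Groceries", Transactions!C:C)
=SUM(Transactions!C:C)
```

The first formula totals everything you've tagged as Groceries; the second totals all expenses, which you can subtract from your income to get the kind of net savings figure the annual template reports.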

For users who want to simplify the budgeting process, some Sheets templates are connected with third-party software that allow you to import your financial transaction data.

Google Sheets is secure for tracking your finances because it works through Google server protection: Unless you've shared your Google Sheet with someone, nobody can access your files without your Gmail account username and password. If you're interested in making your Google account more secure, the company offers two-factor authentication and physical security keys like YubiKey.

Google Sheets is a completely free way to start budgeting; all you need is a Gmail account.

Google Sheets is offered on web browser and smartphone devices. The Google Sheets app is available in the App Store (for iOS) and on Google Play (for Android).

Rating in App Store at time of writing: 4.8 out of 5 (over 900,000 ratings)

Rating on Google Play at time of writing: 4.0 out of 5 (over 100,000 ratings)

If you're new to budgeting and like the idea of manually entering your expenses and income into a spreadsheet, Google Sheets is the best free option out there.

Users have access to different template options, third-party add-on software for certain templates and the ease of accessing their spreadsheet both on their computer and their phone.

For those who want a budgeting app that easily syncs to their bank account and credit card and automatically categorizes their expenses, consider Mint. The budgeting app is also completely free and comes with a few more bells and whistles, including alerting you when you're over budget, if you've incurred ATM fees or have payments due.

To determine which free budgeting tools offer the best user experience, Select analyzed over a dozen different budgeting tools and looked at their features and user reviews.

We narrowed down our ranking by looking for at least one pick in each of the following categories: spreadsheets, desktop software and smartphone apps. We wanted variety in the ways users can build their budget, and we ranked each tool by who it is best designed for (anyone, beginners, investors or small business owners).

The five tools we selected for this ranking are all free and easy to use. For our selection, we also favored budgeting tools that have strong user reviews.

Other factors we looked at included the security features, user customization and illustrative data insight, such as graphs and reports, that are available.

Editorial Note: Opinions, analyses, reviews or recommendations expressed in this article are those of the Select editorial staff's alone, and have not been reviewed, approved or otherwise endorsed by any third party.

Read the original:
Google Sheets is the best free spreadsheet for anyone trying to budgethere's how to get started - CNBC

Resilient Coders is bringing its free and equity-minded coding bootcamp to Pittsburgh – Technical.ly

Resilient Coders, a free and equity-focused coding bootcamp, is coming to Pittsburgh.

Grounded in a mission to advance opportunities for people of color in the tech industry, the Boston-based nonprofit trains students for careers in software development to further community and economic justice for employees of color.

"We need to materially shift tech culture," founder and Executive Director David Delmar Sentes told Technical.ly. "That must be led by Black and Latinx folks, and that's going to require alumni from Resilient Coders and alumni from programs like ours, having the opportunity to meet each other and essentially infiltrate the tech industry and cause the shift themselves."

Resilient Coders expanded to Philadelphia at the end of last year with a virtual pilot cohort of five students (along with 21 from Boston), who graduated at the end of May. All five of those Philly students reportedly ended the bootcamp with at least one job offer, and an average starting salary of $127,000.

Now, the 20-week program, which trains people of color without college degrees in software development, will launch a pilot cohort of 12 students in Pittsburgh with support from the Richard King Mellon Foundation.

"Pittsburgh had not really been on my radar until they reached out," said Delmar. "They essentially afforded us the opportunity to have an exploratory process, and so that meant that we were able to apply a certain degree of resources to just getting to know Pittsburgh, and its people. And, I mean, I was just blown away."

Delmar and his team not only met with potential hiring partners and tech leaders, but also with organizations like Community Forge, a community center offering business growth resources in Wilkinsburg, and Manchester Bidwell Corporation, an arts and career training center in Manchester. Resilient Coders partnered with similar institutions when they launched in Philadelphia, but Delmar noticed a difference in the smaller neighborhoods of Pittsburgh.

"Something I think that's unique about Pittsburgh is that everybody's on board to come together for Pittsburgh," he said, remarking on the shared motivation he saw to leverage local talent in tech. "What sticks out to me is the fact that Pittsburgh is a city in the midst of self-reinvention; there's a certain degree of spirit of self-disruption that you're not necessarily going to find everywhere."

Resilient Coders' 20-week bootcamp includes technical training in HTML, CSS, JavaScript, jQuery, React, Node and MongoDB. The program also focuses on soft skills training in communication, time management and hiring process preparation. Delmar and his team recruit students through open online hackathons hosted before the launch of a new cohort, assessing them more for their potential to stay committed to the program and to Resilient Coders' long-term mission of equity in tech than for any hard coding skills.

Resilient Coders' managing director of engineering, Leon Noel, with a Resilient Coders student. (Courtesy photo)

Unlike many other coding bootcamps, Resilient Coders offers this training at no cost to students, and even pays them a stipend throughout their participation to relieve some of the worry of any lost income during the 20 weeks. Individual, corporate and organizational philanthropy provide funding.

Delmar sees this as a necessity of any bootcamp or career training program, underscoring that learning new skills for career advancement shouldn't come with heavy debt.

"I don't believe we can begin a conversation in this country around closing the racial wealth gap, around ameliorating the stratification of wealth, until there is an option available to folks in which education is free," he said.

Similar to the Philadelphia cohort and others that have taken place during the pandemic, Resilient Coders' Pittsburgh students will complete the bootcamp online. And Delmar plans to keep it that way.

"A year and a half ago, I would have been one of those people shaking their fist at that, saying that education must be in person," he said. "And then we proved me wrong."

The success of the remote cohorts so far in terms of job offers and starting salaries made Delmar realize the bootcamp could likely continue online post-pandemic and still achieve its mission. And, he noted, collaborations with trusted local community groups have made operating on a remote basis easier.

"It's important to us to work with the folks that are on the ground, who're already doing the incredible work necessary to make sure that folks have the stability necessary to have the opportunity to uplift themselves," he said.

Recruitment events for the upcoming Pittsburgh cohort starting on July 12 have ended, but keep an eye out for future hackathons on Resilient Coders' website and social media pages.

P.S. Check out Technical.ly's roundup of eight other coding bootcamps and workshops offering web development training and more.

Excerpt from:
Pittsburgh Resilient Coders is bringing its free and equity-minded coding bootcamp to Pittsburgh - Technical.ly Pittsburgh - Technical.ly

Dependable Debian is like a rock in a swirling gyre of ‘move fast and break things’, and version 11 is no different – The Register

Hands on: Debian 11 is the venerable Debian Project's first new release in more than two years, nicknamed "Bullseye" after the Toy Story character and supplanting Debian 10 "Buster" (all Debian releases bear names from the kids' film).

Since Debian is the source from which dozens of other distros draw, notably Ubuntu, its major updates are well worth paying attention to, even if you aren't a Debian user. If you are a Debian user and you've been patiently waiting for an updated kernel to work with all the latest hardware, I have good news: the 5.10 LTS Linux kernel is here. More on that in a minute.

First, for the Linux newcomers, it might help to understand why Debian only releases new versions every couple of years when most popular distros crank out several new versions each year.

Debian is a huge, sprawling project and probably has more moving pieces than just about any other software project around. That's part of the reason Debian releases don't come along very often: it takes a long time to get that many ducks in a row.

The other reason Debian is slow to change is that it's supposed to be rock solid and reliable. In my experience at least it is exactly that. I've been running Debian on servers for well over a decade now and have never had an update break something. Never. There isn't a single other piece of software I could say that about.

Still, Debian's infrequent updates are an oddity in a software world obsessed with "moving fast and breaking things" as some like to say. Debian has a reputation for being a good choice for servers, but a little boring and behind the times on the desktop. That hasn't been true for a while now thanks to Flatpaks and Snap packages, but it's a reputation that lingers.

For many users, myself included, Debian is like a rock in an ever-swirling sea of updates. I recently rejoined the league of Debian desktop users after a few years in the Arch camp. While I still think Arch is a fine distro, I have reached an age where I have better things to do than look after my laptop. I want to install a system and not think about it again for at least five years. This is the use case where Debian excels and Debian 11 is no exception. It's been solid and "just works" since the RC 1 release I installed earlier this year.

Debian 11 ships with the shiny new Linux 5.10 LTS kernel, which happened to arrive just in time to fit into this release cycle. The 5.10 kernel, which will also be part of the next version of Android, will be supported until 2026.

The 5.10 kernel is notable for some speed improvements for the ext4 file system (the default file system in the Debian installer), as well as significant improvements to some less-common chip architectures. Debian calls itself "the universal operating system" and indeed its chip architecture support is far broader than most distros. That makes it a popular base for developers of everything from embedded devices to auto-based systems.

It's also worth noting, given that just about everyone else has abandoned it, that Debian still supports i386. At this point, if you have a 32-bit machine still around, Debian is likely your best option for getting a well-supported system.

Along with the improvements mentioned above comes the usual slew of kernel updates and hardware drivers to support new devices, including kernel-level support for exFAT filesystems. You no longer need to install Fuse just to mount a Windows drive. If you're curious to know the finer details of everything new in this kernel, you can read through the changelog.
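In practice, that means a drive formatted as exFAT now mounts with nothing beyond the kernel. A quick sketch (the device name /dev/sdb1 is hypothetical; substitute your own):

```
sudo mount -t exfat /dev/sdb1 /mnt   # uses the new in-kernel driver, no FUSE package needed
```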

Debian's repositories are legendarily massive and they just keep growing. Bullseye boasts 13,370 new packages, bringing the total to 57,703. Well over half of all that software, some 35,532 packages, has been updated for this release. That's obviously beyond the scope of what I could begin to cover here, beyond the scope of what anyone can cover anywhere really, which is why I say Debian's scope is staggering.

The default desktop

Among the highlights are some important changes to the most popular desktops and applications, as well as some significant updates to the server-side packages sysadmins rely on.

The desktop selection is larger with this release. From the installer you can choose Gnome (which serves as the default), Xfce, KDE, Mate, LXQt, and LXDE. If you opt for a minimal installation, you can then later install a number of other desktops, including my favourite, Sway. Most desktops are at or very near their latest release.

The same is true of major desktop apps like LibreOffice, which has been updated to version 7.0, and GIMP, which is now at 2.10.22. That said, Debian is not the distro to look to if you want the latest release from official packages. It might be pretty up to date for the release, but it won't stay that way. The solution is to use Flatpaks and Snap packages for the applications you want to keep polished.

Thanks to the fragmentation of apps across the Flatpak and Snap ecosystem, you'll probably want both installed. Fortunately both are dead simple to set up in Debian Bullseye and I had no trouble installing Flatpaks of Darktable, GIMP, Inkscape, Kdenlive, and other apps that I like to keep up to date.
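For anyone new to Debian, getting those Flatpaks installed is only a few commands. A sketch of the usual setup, using Flathub's standard remote and GIMP's published app ID (check flathub.org for the current instructions):

```
sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gimp.GIMP
```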

While Flatpaks are nothing new, I think it's worth pointing out that they have largely delivered on their promise of providing up-to-date packages even on distros like Debian. Flatpaks (and Snaps) largely eliminate the old "it's so out of date" argument against using Debian on the desktop.

Another nice new feature for desktop users is support for driverless printing and scanning in CUPS and SANE. If you've got a relatively new printer or scanner, it should "just work" in this release. I don't have a scanner to test with, but Debian had no trouble connecting to my Brother printer and didn't require installing any drivers (which often are not free software).

The CUPS support works whether your printer is on the network or plugged in via USB, thanks to the new package ipp-usb, which uses the vendor-neutral IPP-over-USB protocol. This means your printer is treated as a network device, and driverless printing works even if it's actually connected via USB.

While the desktop improvements are welcome, the vast majority of Debian 11 installs will probably be on the server. There's plenty new for sysadmins in this release as well.

Debian's various development stacks have been updated, with most languages moving to their latest stable releases. Python is at 3.9, PHP 7.4, and the GNU compiler gets bumped to 10.2. Also notable, the PostgreSQL database has jumped from version 11 to version 13. Python 2.7, which reached end-of-life status some time ago, is still around in some form for packages that haven't been upgraded, but it isn't supported in any way.

Sysadmins will want to look over the release notes as there are a few potential gotchas among the new features in this release. Some changes to systemd especially are worth studying before you upgrade any mission-critical servers. Debian developer Michael Prokop has a good overview of what's changed in Debian 11 from a sysadmin point of view.

As with any new Debian update quite a few things have been removed too. This time around some 7,278 packages have been dropped for one reason or another. Two that caught my eye were Chef, which has been dropped over trademark issues, and rsnapshot, a handy wrapper around rsync, which was removed because it's no longer maintained upstream.

I made the jump to Debian 11 several months ago when the first release candidate arrived and I've had no issues on the desktop. Debian 11 has been a stable, dependable system even pre-release. If you're already happy with your distro of choice, there may not be a lot to tempt you into Debian. Ubuntu, which is based on Debian's testing release, remains a friendlier distro for many since it makes installing any non-free software components much easier than Debian.

If you're already using Debian then you already know any new release is well worth upgrading to. The Debian Project has a good overview of how to upgrade from Debian 10 Buster to Debian 11 Bullseye, or, if you'd like to start fresh, you can always grab the live CD to do a clean installation.
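For reference, the heart of that official upgrade procedure looks roughly like the following. This is a sketch, not a substitute for the release notes; in particular, the security archive's suite name changed in Bullseye, which the second command accounts for:

```
# Point apt at the new release
sudo sed -i 's/buster/bullseye/g' /etc/apt/sources.list
# The security suite was renamed from bullseye/updates to bullseye-security
sudo sed -i 's|bullseye/updates|bullseye-security|g' /etc/apt/sources.list

sudo apt update
sudo apt upgrade --without-new-pkgs   # the minimal upgrade the notes recommend first
sudo apt full-upgrade                 # then the full distribution upgrade
sudo reboot
```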

Read more here:
Dependable Debian is like a rock in a swirling gyre of 'move fast and break things', and version 11 is no different - The Register

AI Weekly: The promise and limitations of machine programming tools – VentureBeat


Machine programming, which automates the development and maintenance of software, is becoming supercharged by AI. During its Build developer conference in May, Microsoft detailed a new feature in Power Apps that taps OpenAI's GPT-3 language model to assist people in choosing formulas. Intel's ControlFlag can autonomously detect errors in code. And Facebook's TransCoder converts code from one programming language into another.

The applications of computer programming are vast in scope. And as computers become ubiquitous, the demand for quality code draws an ever-growing number of aspiring programmers to the profession. After years of study to become proficient at coding, experts learn to convert abstract concepts into concrete, executable programs. But they spend the majority of their work hours not programming. According to a study from the University of Cambridge, at least half of developers' efforts are spent debugging, which costs the software industry an estimated $312 billion per year.

AI-powered code suggestion and review tools promise to cut development costs substantially while allowing coders to focus on more creative, less repetitive tasks, according to Justin Gottschlich, principal AI scientist at Intel's machine programming division. Gottschlich is spearheading the work on ControlFlag, which fuses machine learning, formal methods, programming languages, and compilers to detect normal coding patterns, identifying abnormalities in code that are likely to cause a bug.

"Prior to machine learning- or AI-based programming systems, programmers had dozens, perhaps hundreds, of tools to help them be more productive, produce code with fewer logic errors, improve the software's performance, and so on. However, nearly all of these systems were rules-based," Gottschlich told VentureBeat via email. "While useful, rules-based systems are inherently limited in scope by the rules that have been programmed into them. As such, if new kinds of things occur, the systems would need to be updated by humans. Moreover, these rules-based systems have always been prone to human error in creating the rules encoded in them. For example, programmers may accidentally create a rule to find a certain type of bug, but incorrectly define the rules to find it. This hidden bug in the rules system could go undetected forever."

Gottschlich asserts that AI-based systems offer benefits over the rules-based systems of yesteryear because AI can learn on its own in an unsupervised fashion, enabling it to draw on massive code databases. With unsupervised learning, an algorithm is fed unknown data for which no previously defined labels exist. The system must teach itself to classify the data by processing it to learn from its structure.

For example, ControlFlag was trained on over 1 billion unlabeled lines of code to identify stylistic variations in programming language. As for TransCoder, it learned to translate between C++, Java, and Python by analyzing a GitHub corpus containing over 2.8 million repositories. Microsoft trained a bug-spotting program on a dataset of 13 million work items and bugs from 47,000 developers across Azure DevOps and GitHub repositories. And code review platform DeepCode's algorithms were taught using billions of lines of code captured from public open source projects.
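To make "learning normal patterns from unlabeled code" concrete, here is a toy sketch in Python, emphatically not Intel's actual ControlFlag pipeline: it clusters code lines by character n-gram statistics and ranks lines by how poorly they fit any learned cluster. It assumes scikit-learn is installed, and the sample lines are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Unlabeled "training data": nothing tells the model which lines are buggy.
code_lines = [
    "if (x == 0) return;",
    "if (y == 0) return;",
    "if (z = 0) return;",  # assignment where a comparison was intended
    "for (i = 0; i < n; i++) sum += a[i];",
    "for (j = 0; j < m; j++) sum += b[j];",
]

# Character n-grams capture operator-level patterns such as "==" versus "=".
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
X = vectorizer.fit_transform(code_lines)

# Learn two "normal" pattern clusters, then measure each line's distance
# to the nearest cluster center; distant lines are anomaly candidates.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
distances = kmeans.transform(X).min(axis=1)

for dist, line in sorted(zip(distances, code_lines), reverse=True):
    print(f"{dist:.3f}  {line}")
```

The line with the stray "=" tends to sit farthest from its cluster, which is the whole trick: no labels were provided, yet the odd pattern surfaces on its own.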

There's a difference between AI-powered coding tools that can generate code from whole cloth and tools that merely augment a programmer's workflow, of course. The latter is more common. Startups such as Tabnine (formerly Codota) are developing platforms that suggest and autocomplete scripts in Python, C, HTML, Java, Scala, Kotlin, and JavaScript. Ponicode taps AI to check the accuracy of code. Intel's Machine Inferred Code Similarity engine can determine when two pieces of code perform similar tasks, even when they use different structures and algorithms. And DeepCode offers a machine learning-powered system for whole-app code reviews, as does Amazon.

"Currently, we see a lot of AI-powered assistants, enabling software engineers to gain velocity and accuracy in their work. And the reason for the availability of more assistant tools than automation tools is that AI-powered automation has simply not yet reached the level of accuracy required," Ponicode CEO Patrick Joubert told VentureBeat. "Our industry is still young, and even though we can already see the potential of automation with AI-based code generators, we have to acknowledge that automatically generated code is still pretty unmaintainable and the overall quality is not meeting the right standards yet. While some engineers are working on the future of AI-powered automation, my team and I, along with many other stakeholders, are dedicated to creating tools that can be used today. Within a few years I believe there will be enough tools to cover all steps of the development lifecycle."

For Joubert, the most intriguing categories of machine programming tools today are autocompletion and code analysis. Autocompletion systems like Tabnine and Kite employ AI to analyze semantics and make sense of code, autocompleting functions with a sense of the code's semantic content and purpose. As for code analysis tools like Snyk and DeepCode, they're dedicated to finding vulnerabilities in the code and suggesting actions to resolve them, often with surprising speed and precision.

"When we see the numerous leaks and bugs from any software, including the ones built by leading multinationals, we can agree that [the software] industry has not yet matured. AI-powered coding tools are mostly meant to enhance the developer experience and empower them, thanks to greater velocity and greater efficiency," Joubert added. "Behind these developer-focused benefits, I believe we are on the way to allowing software engineers to build industrial-grade software, where quality, innovation, and speed are reached systematically ... Autocompletion [in particular is] enabling software engineers to focus on the most complex part of their codebase and removing the burden of manually writing long strings of code."

Despite their potential, both AI-powered code generators and coding assistance tools have their limitations. For example, while GitHub alone hosts over 250 million code repositories, most of that data is unannotated. There are only a few examples that describe precisely what the code does, posing a particular challenge for any system that can't learn from unlabeled data.

In an effort to address this, IBM recently released CodeNet, a 14-million-sample labeled dataset with 500 million lines of code written in 55 programming languages. The company claims that the rich annotations added to CodeNet make it suitable for a diverse set of tasks as opposed to other datasets specialized for specific programming tasks. Already, researchers at IBM have conducted several experiments with CodeNet, including code classification, code similarity evaluation, and code completion.

"It is my speculation that in the next decade, code semantics understanding systems are likely to be one of the most important areas of machine programming," Joubert said. "It depends on the domain the machine programming system is being applied to. For small programs, such as unit tests or regression tests, full program synthesizers are a reality today. Yet, for larger programs, it's currently computationally intractable for machine programming systems to generate the potential thousands or millions of lines of code without the assistance of a programmer."

Boris Paskalev, the cofounder and CEO of DeepCode, calls creating a couple of lines of code with AI "more of a toy than a productivity breakthrough." While techniques like natural language processing work well with text because there are fixed limits on the words and syntax that need to be understood, code isn't the same, he argues.

"Since there are no formal rules for software development, [programming] is an art that requires a complete understanding of code and a developer's intentions to produce something that works as expected without bugs," Paskalev told VentureBeat. "As far as we've come in using machine learning and neural networks for code, we're still only in the 'invention of the wheel' phase: machine learning is already proving to be very useful for code, but only after it goes through a semantic machine learning-representation of the code: making sure all semantic facts, variables, transitions, and logical interrelations are clearly represented and considered by the learning model."

To Paskalev's point, recent studies suggest that AI has a ways to go before it can reliably generate code. In June, a team of researchers at the University of California at Berkeley, Cornell, the University of Chicago, and the University of Illinois at Urbana-Champaign released APPS, a benchmark for code generation from natural language specifications. The team tested several types of models on APPS, including OpenAI's GPT-2, GPT-3, and an open source version of GPT-3 called GPT-Neo. In experiments, they discovered that the models could learn to generate code that solves easier problems, but not without syntax errors. Approximately 59% of GPT-3's solutions for introductory problems had errors, while the best-performing model, GPT-Neo, attained only 10.15% accuracy.

"When generating code from whole cloth, there are typically challenges around both specifying the intent and consuming the results," Tabnine CEO Dror Weiss told VentureBeat. "User intent can be specified in natural language by providing examples, writing code in a higher-level language, or in other means. But in most cases, this intent does not provide a full specification of the desired behavior. Also, the generated code may be following a different route than what the developer had in mind. As such, it may be challenging for the developer to judge whether the code performs the desired operation exactly."

Facebook AI researchers Baptiste Rozière and Marie-Anne Lachaux, who worked on TransCoder, agree with Tabnine's assessment. "It is inherently difficult to generate correct code from unspecific natural language problem descriptions that could correspond to several different code snippets. An easier task would be to generate code from an input that is more specific and closer to the output code, like pseudo-code or code written in a different language," they told VentureBeat. "A huge obstacle to the adoption of methods generating large amounts of code without human supervision is that they would need to be extremely reliable to be used easily. Even a tool that could generate methods with 99% accuracy would fail to generate a working codebase of hundreds of functions. It could speed up the code generation process but would still require human testing and intervention."

Rozière and Lachaux also point out that tasks around code generation are generally much harder than classification tasks because the model has a lot of freedom and can create many different outputs, making it hard to control the correctness of the generation. Moreover, compared with natural languages, programming languages are very sensitive to small errors. A one-character difference can change the semantics of the code and make the output faulty.
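That sensitivity is easy to demonstrate. In this invented Python example, a single extra character in a bounds check changes what the function means without producing any visible error when it's defined:

```python
def in_bounds(items, i):
    return 0 <= i < len(items)   # correct: valid indices run 0 .. len(items) - 1

def in_bounds_buggy(items, i):
    return 0 <= i <= len(items)  # one extra "=" accepts len(items), an invalid index

items = [10, 20, 30]
print(in_bounds(items, 3))        # False
print(in_bounds_buggy(items, 3))  # True, so a later items[3] would raise IndexError
```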

"Current machine learning algorithms may not be able to generalize well enough to different problems to match human performance for coding interviews without larger datasets or much better unsupervised pre-training methods," Rozière and Lachaux said.

Paskalev thinks it'll be at least five to ten years until natural language processing enables developers to create meaningful components or even entire apps from a simple description. But Gottschlich is more optimistic. He notes that AI-powered coding tools aren't just valuable in writing code, but also when it comes to lower-hanging fruit like upgrading existing code. Migrating an existing codebase to a modern or more efficient language like Java or C++, for example, requires expertise in both the source and target languages, and it's often costly. The Commonwealth Bank of Australia spent around $750 million over the course of five years to convert its platform from COBOL to Java.

"Deep learning already enables us to cover the smaller tasks, the repetitive and redundant ones which clutter a software engineer's routine. Today, AI can free software engineers from tedious tasks slowing them down and decreasing their creativity," Gottschlich said. "The human mind remains far superior when it comes to creation, innovation, and designing the most complex parts of our softwares. Enabling them to increase velocity in these exciting, high added value parts of their work is, I believe, the most interesting way to leverage the power of machine learning today."

Joubert and Weiss say that the potential business value of machine programming also can't be ignored. An estimated 19% to 23% of software development projects fail, with that statistic holding steady for the past couple of decades. Standish Group found that challenged projects, i.e., those that fail to meet scope, time, or budget expectations, account for about 52% of software projects. Often, a lack of user involvement and clear requirements is to blame for missed benchmarks.

"We see a great number of new tools using AI to enhance legacy code and help existing assets reach industrial-grade standards. We can elevate developer legacy code management workflows and be part of reducing the hefty level of technical debt built up over the past 50 years in the software industry," Joubert said. "The days when developers had to write and read code line by line are gone. I'm excited to see how the other steps in the software development lifecycle are going to be transformed and how tools will reach the same level that Kite or Snyk have attained. Leveraging AI to build efficient, one-purpose, tested, secure, and documented code effortlessly is going to profoundly change the way software companies can create incremental value and innovation."

From Weiss' perspective, AI-powered coding tools can reduce costly interactions between developers, like Q&A sessions and repetitive code review feedback, while shortening the project onboarding process. "[These] tools make all developers in the enterprise better. They take the collective code intelligence of the organization and make it available, during development time, to all developers. This allows any developer on the team to punch above their weight," he said.

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

See the rest here:
AI Weekly: The promise and limitations of machine programming tools - VentureBeat