Wanna use your Nvidia GPU for acceleration but put off by CUDA? OpenAI has a Python-based alternative

In brief If you’ve always wanted to program your Nvidia GPU to accelerate machine learning, image processing, and other workloads, but find Nv’s CUDA too daunting or too much of a faff to learn, you’re in luck.

OpenAI late last month released Triton, a Python-based environment that aims to help developers write and compile code to run on Nvidia GPUs much more easily, without having to grapple with CUDA.

The San Francisco upstart has been using Triton to optimize its software so that its machine-learning algorithms run more efficiently on specialized hardware. Building state-of-the-art models is costly, and developers have to be able to train and tweak their performance quickly, which requires writing custom GPU kernels.

“We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce,” OpenAI said. “Triton makes it possible to reach peak hardware performance with relatively little effort; for example, it can be used to write FP16 matrix multiplication kernels that match the performance of cuBLAS — something that many GPU programmers can’t do—in under 25 lines of code.”
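To give a flavor of what that looks like, here is a minimal sketch in the style of Triton's own tutorials: a simple vector-addition kernel rather than the FP16 matmul OpenAI mentions. The function and variable names are ours, and the exact syntax has varied a little between Triton releases, so treat this as illustrative rather than canonical.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide chunk of the vectors
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # Launch one program instance per block of elements
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out


x = torch.rand(98432, device="cuda")
y = torch.rand(98432, device="cuda")
print(torch.allclose(add(x, y), x + y))  # expect True
```

The point is that the kernel is ordinary-looking Python decorated with @triton.jit; the compiler handles the memory coalescing and scheduling work that a CUDA programmer would otherwise do by hand.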

You can read more about Triton and its documentation here. Support for other GPUs, such as AMD’s, is said to be coming.

Twitter has offered a bounty to anyone who can find biases in its algorithms, following criticism of its image-cropping tool, which favored White people and women.

Evidence of computer-detected gunshot withdrawn from trial

Prosecutors in America have withdrawn evidence from a murder trial that was said to show a gunshot detected by classification algorithms.

One evening in May last year, Safarain Herring, 25, was shot in the head, and died two days later in hospital. Michael Williams, 64, was charged with his slaying, and denies any wrongdoing: he said Herring was killed by someone else in a drive-by shooting. Williams was said to have brought Herring to St Bernard Hospital in Chicago.

Cities in the United States have a system built by ShotSpotter dotted around their streets; this consists of microphones attached to computer systems programmed to identify the sound of gunfire and automatically alert the cops to the location.

One of the pieces of evidence against Williams was a claim that ShotSpotter’s sensors in Chicago identified gunfire where surveillance cameras had seen Williams stop his car by a south-side Chicago block, right when and where the cops said Herring was shot.

However, Williams’ lawyer submitted paperwork [PDF] claiming ShotSpotter actually detected a firework a mile away from that location, and that ShotSpotter later reclassified the bang as a gunshot and the location as being where Williams was seen on camera, Vice first reported.

Williams’ lawyer demanded the court hold an inquiry into the ShotSpotter evidence, and the prosecutors simply withdrew it.

ShotSpotter responded by denying at length that it improperly altered any data or evidence, and hit back at any suggestion it had done so to help the police make a case. It said its software generates real-time alerts automatically; staff later analyze the microphone readings to produce forensic reports for the courts, and these final reports can therefore differ from the initial alerts.

“The idea that ShotSpotter ‘alters’ or ‘fabricates’ evidence in any way is an outrageous lie and would be a criminal offense,” it said in a statement. “We follow the facts and data for our forensic analysis. Period.”

Apple Watch data problematic for health study

Algorithms running on Apple Watches to monitor things like heart rate and sleep patterns may not be a useful source of data for academic research.

JP Onnela, an associate professor of biostatistics at the public health school arm of Harvard University, discovered this the hard way when he asked his collaborator Hassan Dawood, a research fellow at Brigham and Women’s Hospital, to upload his heart rate data recorded by his Apple Watch.

To his surprise, when the same samples covering the same time period were exported twice, the recordings showed a big discrepancy. Heart rate data exported once on September 5, 2020 and again on April 15, 2021 should have been identical, yet the two exports did not match.
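For illustration only, this is roughly how such a discrepancy could be spotted by lining up two exports of the same window; the file names and column layout here are our assumptions, not the actual export format the researchers used.

```python
import pandas as pd

# Hypothetical CSV exports of the same Apple Watch heart-rate window,
# pulled on two different dates. Column names are assumed for illustration.
export_2020 = pd.read_csv("hr_export_2020-09-05.csv", parse_dates=["timestamp"])
export_2021 = pd.read_csv("hr_export_2021-04-15.csv", parse_dates=["timestamp"])

# Align the two exports on timestamp and compare the reported readings
merged = export_2020.merge(export_2021, on="timestamp", suffixes=("_2020", "_2021"))
mismatches = merged[merged["bpm_2020"] != merged["bpm_2021"]]

print(f"{len(mismatches)} of {len(merged)} overlapping readings differ between exports")
```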

Onnela reckons Apple’s code could be to blame. “These algorithms are what we would call black boxes — they’re not transparent. So it’s impossible to know what’s in them,” he told The Verge. The lack of transparency means Apple may have tweaked its software, making it difficult for the researchers to trust the data collected by the iGiant’s devices.

Apple, however, said there wasn’t an issue with its algorithms, and that the problem probably lies in the export process. Either way, the discrepancies suggest the devices may not be a trustworthy source of data for academic purposes.

You can read more about the experiment here. ®
