You shouldn't stare at the sun. It's dangerous. But this month marks a full decade of NASA's Solar Dynamics Observatory doing the staring at the sun for us.
It's been studying the star closest to Earth non-stop from orbit, gathering 425 million high-resolution images since June 2010.
The data collected has helped scientists make many discoveries, but for those of us scrolling the internet for some kind of good news, NASA has also turned the observations into something fun: a stunning time-lapse of the star's activity over the last 10 years, compiled from those millions of images.
The time-lapse assembles images taken at "an extreme ultraviolet wavelength that shows the sun's outermost atmospheric layer — the corona." The movie compiles a photo every hour, with dark slides "caused by the Earth or the moon eclipsing [the observatory] as they pass between the spacecraft and the sun."
The full video lasts 61 minutes, showcasing the sun's 11-year solar cycle with its rise and fall in activity. The shift in solar activity is clearly visible as the number of sunspots swells, erupting into violent whips of magnetic field lines and solar flares. Activity then calms again into a period known as solar minimum, when solar activity is relatively low. That's the period we're in now, with one result being that the northern lights are less frequently seen at lower latitudes.
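Those figures imply a familiar playback speed. Here's a quick back-of-the-envelope check (a sketch assuming exactly one frame per hour over 10 years, ignoring leap days and the eclipse gaps):

```python
# Sanity check: one photo per hour for 10 years, played back in 61 minutes.
hours_per_year = 365 * 24
frames = 10 * hours_per_year        # ~87,600 frames over the decade
runtime_seconds = 61 * 60           # 61-minute video

fps = frames / runtime_seconds
print(f"{frames} frames / {runtime_seconds} s = {fps:.1f} fps")
```

That lands almost exactly on 24 frames per second, the standard cinematic frame rate.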
It's stunning to see from our perspective even if a decade is little more than a blip in the life of a star.
The full time-lapse video in 4K resolution can be viewed at cat5.tv/sun
Sent to us by: Robbie Ferguson
Tim Hortons is facing a class-action lawsuit in Quebec over data collection issues in the company’s mobile ordering app, filed a day after four privacy watchdogs announced a joint investigation into the company’s overreach.
The court application, filed by two Montreal-based law firms on Tuesday, cites an investigative story by the Financial Post that revealed the Tim Hortons app was logging users’ location data in the background even when the app wasn’t open.
The app was streaming GPS location data to Radar Labs Inc., an American company that analyzes location data to infer where users live and work, and to log a person’s visits to Tim Hortons’ competitors, such as Starbucks or McDonald’s Corp.
Immediately after privacy commissioners for the federal government, Quebec, Alberta and British Columbia announced their joint investigation on Monday, Tim Hortons said in a statement that it has discontinued its practice of tracking users’ location when the app is not open.
The lead plaintiff is a Montreal resident who works in the IT sector. Though described by his lawyer as a "tech-savvy guy," he was shocked to discover how the app had been tracking him.
Consumer protection lawyer Joey Zukran said that simply stopping the practice of background location tracking isn’t enough: Tim Hortons’ parent company appears to have been tracking the lead plaintiff since last year, and the damage is already done.
Zukran says, "They’ve gained a valuable database of information and behaviour patterns and activities of individuals. So are they now just going to throw that out, or are they going to profit from it? My guess is the latter."
Tim Hortons’ chief corporate officer Duncan Fulton said in an emailed statement that the company had no comment on the class-action lawsuit, and reiterated that it has discontinued background location tracking, although the app may still record a user’s location while it is open.
In many cases of privacy violations, it’s difficult to sue because litigants can’t put a dollar figure on the harm they have suffered. But the Quebec Charter of Rights and Freedoms makes privacy a protected right, and that simplifies the case.
Most privacy and data cases in Canada have been focused on breaches where companies allowed private information to be leaked or stolen by hackers.
But litigation around issues relating to data and privacy could become more common, depending on how the courts respond in this case and future cases.
Sent to us by: Robbie Ferguson
A new paper published by Disney Research describes a fully automated, neural network-based method for swapping faces in photos and videos, and the first to produce high-resolution results of sufficient quality to be used in film and TV.
The researchers specifically intend this tech for replacing an existing actor’s performance with a substitute actor’s face, for instance when de-aging or aging a character, or potentially when portraying an actor who has passed away. They also suggest it could be used to replace the faces of stunt doubles in scenes where a double is required.
This new method differs from other approaches in a number of ways. One is that any face in the set can be swapped with any recorded performance, making it possible to relatively easily re-image the actors on demand. Another is that it recreates the contrast and lighting conditions of the scene, ensuring the swapped-in actor looks like they were actually present.
You can check out the results for yourself in this video.
As you can see, there’s still a hint of “uncanny valley” going on here. The researchers acknowledge as much in their paper, while calling the work “a major step toward photo-realistic face swapping that can successfully bridge the uncanny valley”.
It is still far more realistic than other attempts, which is especially apparent in the side-by-side comparisons with other techniques. Most notably, it works at much higher resolution, which is key for actual entertainment industry use. In the stunt-double example, the result could come across as entirely convincing.
The examples presented are a very small sample, so it remains to be seen how broadly this can be applied; the subjects used appear to be primarily white, for instance. There is also the question of the ethical implications of any use of face-swapping technology, especially in video, as it could be used to fabricate credible video or photographic “evidence” of something that never happened.
Given, however, that the technology is now in development from multiple quarters, it’s essentially long past the time for debate about the ethics of its development and exploration. Instead, it’s welcome that organizations like Disney Research are following the academic path and sharing the results of their work, so that others concerned about its potential malicious use can determine ways to flag, identify and protect against any bad actors.
Sent to us by: Robbie Ferguson
According to the semi-annual ranking announced by the U.S.-European TOP500 project on Monday, Japan’s latest supercomputer ‘Fugaku’ is the world’s fastest.
This is the first time in nine years that a Japanese supercomputer has taken the top position; the last was Fugaku’s predecessor, the “K computer,” which took first place at this time in 2011.
Jointly developed by Japan’s state-backed RIKEN Center for Computational Science and Fujitsu, Fugaku is the first ever ARM-based system to become the world’s fastest supercomputer.
It achieved a High-Performance Linpack (HPL) score of 415.5 petaflops, making it 2.8 times faster than the 148.6 petaflops of IBM’s Summit, which now sits in second place in the TOP500 supercomputer rankings.
Fugaku is powered by Fujitsu’s 48-core, Arm-based A64FX system-on-chip and comprises nearly 7.3 million CPU cores. In single-precision operations, it reaches a peak performance of over 1,000 petaflops, pushing our vernacular into the next tier: the exaflop. Each chip runs at 2.0 GHz with a boost to 2.2 GHz and carries 32 GB of second-generation High Bandwidth Memory.
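The headline figures are easy to cross-check. A small sketch (note the exact core count of 7,299,072 is taken from the June 2020 TOP500 entry, an assumption beyond the article's "nearly 7.3 million"):

```python
# Cross-check the speedup and chip count from the figures quoted above.
fugaku_pflops = 415.5
summit_pflops = 148.6
speedup = fugaku_pflops / summit_pflops     # HPL speedup over Summit
print(f"Speedup over Summit: {speedup:.1f}x")

total_cores = 7_299_072    # June 2020 TOP500 entry ("nearly 7.3 million")
cores_per_chip = 48        # one 48-core A64FX per node
chips = total_cores // cores_per_chip
print(f"A64FX chips: {chips:,}")
```

That works out to roughly 152,000 A64FX chips, and confirms the 2.8x speedup quoted from the rankings.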
This Arm-based supercomputer also secured the number one position in other rankings that test machines on different workloads, including Graph 500, HPL-AI, and the High-Performance Conjugate Gradient benchmark. According to Fujitsu, this is the first time a supercomputer has simultaneously topped all four rankings.
Currently installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, Fugaku will also carry out a wide range of applications that will address “high-priority social and scientific issues.”
While the supercomputer isn’t expected to begin full-time operation until April next year, researchers are already using it in the fight against COVID-19.
In recent years, countries like the U.S. and China have dominated the race to develop powerful machines. China again dominated the TOP500 list by system count with 226 supercomputers, while the U.S. took second place with 114 systems, followed by Japan with 30, France with 18, and Germany with 16.
Sent to us by: Robbie Ferguson
Chinese ride-hailing firm Didi Chuxing says it plans to operate more than a million self-driving vehicles by 2030.
According to Didi’s chief operating officer, Meng Xing, the robotaxis are to be deployed in places where ride-hailing drivers are less available.
The company last month completed a more than $500 million fundraising round for the autonomous driving unit, led by SoftBank Group’s Vision Fund 2.
Apple, which is known to be interested in the development of autonomous driving, invested $1 billion in Didi back in 2016.
Last year, Didi said it would start using autonomous vehicles to pick up passengers in Beijing, Shanghai, and Shenzhen this year before expanding the scheme outside China in 2021.
Automakers and tech companies in China are investing heavily in the autonomous driving industry to compete with the likes of Tesla, Alphabet’s Waymo, and Uber.
While some industry insiders say it will take time for the public to trust autonomous vehicles fully, Meng said Didi expects autonomous vehicles to be in mass production by 2025.
Competitors are already offering robotaxi services, but a fleet of a million vehicles would put them all to shame. We'll keep an eye on this and see whether Didi is able to deliver.
Sent to us by: Roy W. Nash
Facebook has created a virtual reality headset that's not much larger than a chunky pair of sunglasses.
The futuristic shades use a specially designed holographic film to miniaturize the lens. Conventional VR displays tend to be bulky because the refractive lenses inside them need a couple of inches to focus the display for the wearer’s eyes.
The result is a pair of glasses that are at most only nine millimeters thick, according to the researchers, and weigh only 17.8 grams. Images from the glasses’ green-and-black display are, frankly, extremely cyberpunk.
The glasses can provide an approximately 90-degree horizontal field of view, according to testing detailed in a new paper published by Facebook Research.
The design also does away with the LCD panels used in conventional VR headsets, instead using lasers to create the image, which means a conventional pixel count doesn’t really apply.
It’s still an early prototype with plenty of limitations. Since light rays from the backlight “fan out significantly before they are focused by the holographic beamsplitter surface,” large regions of the display “do not contribute to the display’s [field of view],” the researchers write in the paper.
The glasses are also still not capable of producing a full color image and exhibit “ghosting” around the edges of the field of view caused by optical surfaces reflecting light.
The team already has a plan to reduce weight even further. By switching to plastic substrates, the researchers expect to bring total weight to just 6.6 grams, about the weight of a pair of large aviator-style sunglasses.
This news comes at the same time as Google's acquisition of Canadian smart-glasses start-up 'North,' which developed the lightweight "Focals" smart glasses, designed with a holographic display that only the wearer can see.
It would appear augmented reality is still in the sights of these two big players, which could mean some interesting tech in the coming years.
Sent to us by: Robbie Ferguson