The Raspberry Pi Foundation has updated its Compute Module with better thermals, an updated application processor and more flash memory.
The Compute Module 3+, a System on Module (SoM) board, is part of a hardware family that's been around since 2014 with the launch of the CM1. That module had a single-core Arm processor clocked at 700MHz, 512MB RAM and a 4GB eMMC.
Three years later, the Compute Module 3 was released with the 1.2GHz processor of the Pi 3 and 1GB RAM. Here we are two years later, and the Compute Module 3+ is carrying on the tradition, adding the processor from the Pi 3 B+ into the mix.
It's the size of a DDR2-SODIMM and will happily plug into a DDR2-SODIMM connector, but don't go slotting it into your PC because the pins don't do the same thing at all. This is, after all, a complete computer.
The module is aimed at those wishing to embed a Raspberry Pi into their devices with a form factor more suitable for the task. All data and power are handled via the DDR2-SODIMM-like connector, making for a very compact device.
The Pi Foundation points to the likes of NEC as an example, where the electronics giant has used the diminutive board as the heart of some monstrous digital signage. It has also made an appearance in media players and industrial control systems.
The Compute Module 3+ is a drop-in replacement for the previous version from a form factor and electrical perspective. However, power supply limitations will keep the CPU at 1.2GHz instead of the 1.4GHz of the full-size Pi 3 B+.
Storage is perhaps the biggest change. There are now 8, 16 and 32GB versions costing $30, $35 and $40 respectively. There is also a flashless version for those that need it.
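With several variants now in circulation, software sometimes needs to tell them apart at runtime. On a running Pi the board identifies itself via a revision code (the `Revision` field in `/proc/cpuinfo`), whose "new-style" bit layout is documented by the Foundation. Below is a minimal decoding sketch; the lookup tables are deliberately abbreviated, and `a02100` is one published revision code for the CM3+:

```python
# Decode a Raspberry Pi "new-style" revision code. Bit layout (documented
# by the Raspberry Pi Foundation): bits 0-3 board revision, 4-11 board
# type, 12-15 processor, 16-19 manufacturer, 20-22 memory size, 23 the
# new-style flag. Tables below are abbreviated for illustration.
BOARD_TYPES = {0x06: "CM1", 0x0a: "CM3", 0x0d: "3B+", 0x10: "CM3+"}
MEMORY_MB = {0: 256, 1: 512, 2: 1024, 3: 2048, 4: 4096, 5: 8192}

def decode_revision(code: str):
    value = int(code, 16)
    if not (value >> 23) & 0x1:          # old-style codes lack this flag
        raise ValueError("old-style revision code: %s" % code)
    board = (value >> 4) & 0xFF          # board type field
    memory = (value >> 20) & 0x7         # memory size field
    return BOARD_TYPES.get(board, "unknown"), MEMORY_MB.get(memory)

# a02100 is a published revision code for the Compute Module 3+
print(decode_revision("a02100"))  # ('CM3+', 1024)
```

On real hardware you would read the code from `/proc/cpuinfo` rather than hard-coding it; the parsing is the same.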
The foundation plans to keep the Compute Module 3+ available until "at least" January 2026 and, in words that will bring joy to Pi fans the world over, stated that this is "the last in a line of 40 nanometer-based Raspberry Pi products", indicating a clearing of the decks before the next generation makes an appearance.
Sent to us by: Roy W. Nash
Facebook has revealed plans to integrate WhatsApp, Instagram and Messenger.
While all three will remain stand-alone apps, at a much deeper level they will be linked so messages can travel between the different services.
The plan was first reported in the New York Times and is believed to be a personal project of Facebook founder Mark Zuckerberg.
Once complete, the merger would mean that a Facebook user could communicate directly with someone who only has a WhatsApp account. This is currently impossible as the applications have no common core.
The work to merge the three elements has already begun, and is expected to be completed by the end of 2019 or in early 2020.
Facebook probably didn't want to talk about this in the middle of a privacy scandal, but its hand was forced by insiders talking to the New York Times.
Until now, WhatsApp, Instagram and Messenger have been run as separate and competing products.
Integrating the messaging parts might simplify Facebook's work. It wouldn't need to develop competing versions of new features, such as Stories, which all three apps have added with inconsistent results.
Cross-platform messaging may also pave the way for businesses on one platform to message potential customers on another.
And it might make it easier for Facebook to share data across the three platforms, to help its targeted advertising efforts.
But bigger still: it makes Facebook's suite of apps a much tighter, interwoven collection of services. That could make the key parts of Facebook's empire more difficult to break up and spin off, if governments and regulators decide that is necessary.
Linking the three systems marks a significant change at Facebook, which has until now let Instagram and WhatsApp operate as largely independent companies.
The decision comes as Facebook faces repeated investigations and criticisms over the way it has handled and safeguarded user data.
Comprehensively linking user data at a fundamental level may prompt regulators to take another look at its data handling practices.
Sent to us by: Bekah Ferguson
Apple says it’s banning Facebook’s research app that collects users’ personal information
Facebook is at the center of another privacy scandal — and this time it hasn’t just angered users. It has also angered Apple.
Apple says Facebook broke an agreement it made with Apple by publishing a “research” app for iPhone users that allowed the social giant to collect all kinds of personal data about those users. The app allowed Facebook to track users’ app history, their private messages, and their location data. Facebook’s research effort reportedly targeted users as young as 13 years old.
As of last summer, apps that collect that kind of data are against Apple’s privacy guidelines. That means Facebook couldn’t make this research app available through the App Store, which would have required Apple approval.
Instead, Facebook apparently took advantage of Apple’s “Developer Enterprise Program,” which lets approved Apple partners, like Facebook, test and distribute apps specifically for their own employees. In those cases, the employees can use third-party services to download beta versions of apps that aren’t available to the general public.
Apple doesn’t review and approve these apps the way it does for the App Store because they’re only supposed to be downloaded by employees who work for the app’s creator.
Facebook, though, used this program to pay non-employees as much as $20 per month to download the research app without Apple’s knowledge.
Apple’s response, via a PR rep this morning: “We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
Facebook pushed back on the idea that it did anything wrong in collecting the user data. Facebook says this program has been ongoing since 2016, which could be evidence that the company wasn’t trying to skirt Apple’s new policies. Facebook did not, however, comment on whether or not it violated Apple’s policies by distributing the app through the Developer Enterprise Program.
There are a lot of reasons Facebook wants to know what apps people are using, which explains why it went to such lengths to get around Apple’s App Store guidelines.
It’s unclear if Facebook’s actual data collection through this research app poses any risks to the company. Facebook did pay users for using the app. But Facebook is also under investigation by the FTC, which is looking into its data privacy practices. Anything that feels fishy will most certainly attract regulators’ attention.
Sent to us by: Solbu
IBM hopes 1 million faces will help fight bias in facial recognition
IBM thinks the data being used to train facial recognition systems isn't diverse enough.
The tech giant released a trove of data containing 1 million images of faces taken from a Flickr dataset with 100 million photos and videos.
The images are annotated with tags related to features including craniofacial measurements, facial symmetry, age and gender.
Researchers at the company hope that these specific details will help developers train their artificial intelligence-powered facial recognition systems to identify faces more fairly and accurately.
John Smith, a fellow and lead scientist at IBM, said, "Facial recognition technology should be fair and accurate. In order for the technology to advance it needs to be built on diverse training data."
Smith stressed the importance of variety in datasets for facial recognition systems to reflect real-world diversity and reduce the rate of error in matching a face to a person.
Experts have warned of the potential for artificial intelligence to be biased. Research has shown that facial recognition technology is much more adept at recognizing the faces of white males than those of minorities.
IBM itself has been the target of criticism over its facial recognition system. A paper by MIT researcher Joy Buolamwini, published last year, found that IBM Watson's visual recognition platform had an almost 35 percent error rate when it came to identifying darker-skinned females, and a less than 1 percent error rate for identifying lighter-skinned males.
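Audits like Buolamwini's rest on a simple idea: evaluate the model's error rate separately for each demographic group rather than in aggregate. A small illustrative sketch (the group names and sample records below are made up, not from the MIT study):

```python
# Illustrative only: compute a face-matching model's error rate broken
# down by demographic group, the kind of disaggregated evaluation used
# in facial-recognition bias audits. Data here is hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Per-group error rate: misidentifications / total attempts
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    ("darker_female", "id_7", "id_3"),   # misidentified
    ("darker_female", "id_3", "id_3"),
    ("lighter_male",  "id_5", "id_5"),
    ("lighter_male",  "id_8", "id_8"),
]
print(error_rates_by_group(sample))  # {'darker_female': 0.5, 'lighter_male': 0.0}
```

An aggregate error rate over the same sample would be 25 percent, hiding the gap between groups entirely, which is why disaggregated reporting matters.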
Studies such as this have heightened concerns over the use of facial recognition in areas like law enforcement, and the potential for AI-powered racial profiling.
Sent to us by: Robbie Ferguson