Tag: tech

  • Amazon’s updated grocery delivery program has some strings attached

    Will Shanklin

    After asserting itself as an overshadowing presence in retail, Amazon is still experimenting with ways to leave a similar mark in groceries. The company’s latest tweak to its service lowers the minimum price for free grocery deliveries to $35. However, most customers using the service will also need to pay a $10 monthly subscription in addition to having a Prime membership ($15 monthly or $139 annually).

    To participate, you must live in one of the 3,500 supported cities and towns in the US. (When signing up, it will let you know if your primary shipping address isn’t supported.) The service offers unlimited grocery deliveries from Amazon Fresh, the Amazon-owned Whole Foods and various local and specialty partners. Those include Cardenas Markets, Save Mart, Bartell Drugs, Rite Aid, Pet Food Express, Mission Wine & Spirits and more.

    The subscription includes one-hour delivery windows where available, unlimited 30-minute pickup orders and priority access to the company’s Recurring Reservations. This feature lets you pick a guaranteed weekly grocery delivery window. To use it, you’ll need to pick your weekly two-hour slot at least 24 hours in advance.


    People using the Supplemental Nutrition Assistance Program (SNAP) and other government assistance programs can get the same grocery delivery benefits for half the price ($5 monthly). If you fall into that camp, you can get those perks for the reduced fee without needing a Prime subscription on top of it.

    It remains to be seen if this latest iteration of the program will stick since Amazon’s strategy has been all over the place. Early last year, the company increased the minimum checkout price for free grocery deliveries from $35 to $150, then dropped it to $100 (while voiding the Prime requirement) about 10 months later. If you like this version of the program, cross your fingers that Amazon doesn’t change it again in a few months.

    Before rolling out the program’s latest version on Tuesday, Amazon tested it in Columbus, OH, Denver, CO, and Sacramento, CA, in late 2023. The company says over 85 percent of survey respondents who used the service were “extremely” or “very” satisfied, giving it high marks for convenience and savings on delivery fees.

    You can see if the program is available in your area on Amazon’s groceries sign-up page. If it is, you can try it free for 30 days before paying.

    This article originally appeared on Engadget at https://www.engadget.com/amazons-updated-grocery-delivery-program-has-some-strings-attached-171513989.html?src=rss


  • 8BitDo’s Nintendo-style Retro Mechanical Keyboard hits a new low of $70 at Woot

    Jeff Dunn

    If you’re in the market for a new mechanical keyboard with some retro flair, here’s a deal worth noting: the 8BitDo Retro Mechanical Keyboard is down to $70 at Amazon subsidiary Woot. That’s the lowest price we’ve tracked. This offer has been live for a few days, but it comes in $30 below 8BitDo’s list price and $10 below the wireless keyboard’s previous low. Unfortunately, the deal only applies to the device’s Fami Edition, which has a color scheme and Japanese characters inspired by the Famicom console Nintendo released in Japan during the ’80s. 8BitDo sells another variant that’s modeled after the US NES, but that one costs $20 more as of this writing. (A third model based on the Commodore 64 is also on the way.) 

    Though it isn’t a formal pick in our guide to the best mechanical keyboards, the Retro Mechanical Keyboard earned a spot in our retro gaming gift guide last year. The vintage aesthetic is the main reason to consider it: If you dig old tech, there aren’t many options going for this kind of look. Still, this is a solid keyboard in its own right. Its tenkeyless form factor should be comfortable for most people, and it can connect over Bluetooth, a wireless dongle or a detachable USB-C cable. While it’s made from plastic, the chassis doesn’t come off as cheap. Its PBT keycaps are crisply textured, and its keys largely feel stable, with no major rattling on larger inputs like the space bar. It also comes with a goofy yet fun pair of NES-style “Super Buttons,” which you can program to perform different commands.

    Be warned, though: It’s on the louder side. The Retro Mechanical Keyboard ships with clicky Kailh Box White V2 switches, which are generally satisfying to press but have a high-pitch tone that your spouse or coworkers may find aggravating. This fits with the retro aesthetic, but the keyboard might be best kept tucked away in a home office. There’s also no backlight or adjustable feet. The switches are hot-swappable, however, so it’s easy to change them out for a different feel down the road. 

    In the end, how much you enjoy the old-school styling will determine whether the Retro Mechanical Keyboard is worth getting. If you want something a little more subdued that costs less than $100, we recommend Keychron’s V Max series in our buying guide. But 8BitDo’s board is still a decent value, and this discount only furthers that. Woot says the offer will run for six more days or until the device sells out.

    Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

    This article originally appeared on Engadget at https://www.engadget.com/8bitdos-nintendo-style-retro-mechanical-keyboard-hits-a-new-low-of-70-at-woot-170000966.html?src=rss


  • Your old Rock Band guitars now work in Fortnite Festival

    Kris Holt

    You may be able to give those plastic Rock Band guitars you have stuffed away in the attic a new lease of life. Fortnite Festival (a Rock Band-style mode that debuted in Fortnite in December) now supports several Rock Band 4 controllers across PlayStation, Xbox and PC, as detailed in a blog post.

    If you have a compatible plastic guitar, you can use it to play new Pro Lead and Pro Bass parts in any Jam Track. These parts have colored notes for each lane that match with the guitar controller buttons. They also include hammer-on and pull-off notes — just like Rock Band and Guitar Hero.

    Epic Games (which bought Rock Band developer Harmonix in 2021 to build music experiences for Fortnite) plans to add support for more peripherals down the line. Hopefully, the developers will make the whammy bar more useful beyond triggering a visual effect too.

    Epic previously said it would add support for Rock Band guitars. Earlier this year, third-party peripheral maker PDP (which Turtle Beach recently purchased) unveiled a new Xbox and PlayStation wireless guitar controller for Rock Band 4 and Fortnite Festival.

    Support for the Rock Band peripherals comes just as Billie Eilish joins the game as its new music icon. Several of her songs are available to buy and use in Fortnite Festival, and you’ll be able to purchase an Eilish outfit (or unlock one through a secondary battle pass) and play as her in the Battle Royale mode.

    Meanwhile, Epic has added a setting that allows players to hide certain emotes that others often use for trolling in Battle Royale. For instance, after being eliminated, a player might not want to see a rival using the “Take the L” emote, which involves making the shape of an “L” (for “loser”) on their forehead and doing a silly dance. The setting won’t stop players from using any emotes and it only hides four of them for now. Somehow, one of the emotes that the setting doesn’t hide is a personal favorite called “Rage Quit.”

    This article originally appeared on Engadget at https://www.engadget.com/your-old-rock-band-guitars-now-work-in-fortnite-festival-164054839.html?src=rss


  • Differential Privacy and Federated Learning for Medical Data

    Eric Boernert

    A practical assessment of Differential Privacy & Federated Learning in the medical context.

    (Bing AI generated image, original, full ownership)

    Sensitive data cries out for more protection

    Concern for data privacy seems to have quieted down in the era of large language models trained on everything from the public internet, often regardless of actual intellectual property rights, as their respective company leaders openly admit.

    But there’s a parallel universe when it comes to patients’ data, our health records, which are undoubtedly far more sensitive and in need of protection.

    Regulations are also getting stronger all over the world; the trend is unanimously toward stricter data protection rules, including for AI.

    There are obvious ethical reasons, which we don’t have to explain, but at the enterprise level there are also regulatory and legal reasons that require pharmaceutical companies, labs and hospitals to use state-of-the-art technologies to protect the data privacy of patients.

    The federated paradigm is here to help

    Federated analytics and federated learning are great options for analyzing data and training models on patients’ data without accessing any raw data.

    In the case of federated analytics, this means, for instance, that we can compute the correlation between blood glucose and patients’ BMI without accessing any raw data that could lead to patient re-identification.

    In the case of machine learning, let’s use the example of diagnostics, where models are trained on patients’ images to detect malignant changes in their tissues and catch early stages of cancer, for instance. This is literally a life-saving application of machine learning. Models are trained locally at the hospital level using local images and labels assigned by professional radiologists; an aggregation step then combines all those local models into a single, more generalized model. The process repeats for tens or hundreds of rounds to improve the performance of the model.
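    In its simplest form, the aggregation step described above is federated averaging (FedAvg): the server averages the local weight vectors, weighting each by its dataset size. A minimal sketch with made-up weights and sizes (not any hospital’s real model):

```python
def fed_avg(local_models, num_examples):
    """Aggregate local model weights into one global model, weighting each
    site's contribution by how many training examples it holds (FedAvg)."""
    total = sum(num_examples)
    n_params = len(local_models[0])
    global_model = [0.0] * n_params
    for weights, n in zip(local_models, num_examples):
        for i, w in enumerate(weights):
            global_model[i] += w * (n / total)
    return global_model

# Three hypothetical hospitals, each with a locally trained model
# (flattened weight vector) and a different dataset size.
models = [[0.2, -1.0], [0.4, -0.8], [0.3, -0.9]]
sizes = [100, 300, 100]
global_model = fed_avg(models, sizes)
```

    The weighting means a hospital with three times the data pulls the global model three times as hard, which is one reason aggregation strategies matter.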

    Fig. 1. Federated learning in action, sharing model updates, not data.

    The reward for each individual hospital is that it will benefit from a better-trained model, able to detect disease in future patients with higher probability. It’s a win-win situation for everyone, especially patients.

    Of course there’s a variety of federated network topologies and model aggregation strategies, but for the sake of this article we focus on a typical example.

    Trust building with the help of technology

    It’s believed that vast amounts of clinical data are not being used due to a (justified) reluctance of data owners to share their data with partners.

    Federated learning is a key strategy for building that trust, backed by technology rather than only by contracts and faith in the ethics of particular employees and partners of the organizations forming consortia.

    First of all, the data remains at the source, never leaves the hospital, and is not centralized in a single, potentially vulnerable location. The federated approach means there are no external copies of the data that may be hard to remove after the research is completed.

    The technology blocks access to raw data through multiple techniques that follow the defense-in-depth principle. Each of them reduces the risk of data exposure and patient re-identification by orders of magnitude, making it economically unviable to discover or reconstruct raw-level data.

    Data is minimized first to expose only the properties necessary for the machine learning agents running locally, PII is stripped, and anonymization techniques are applied on top.

    Then local nodes protect local data against the so-called “too curious data scientist” threat by allowing only code and operations accepted by the local data owners to run against their data. For instance, model training code deployed locally at the hospital as a package is explicitly approved (or rejected) by the local data owners. Remote data scientists cannot simply send arbitrary code to remote nodes, as that would allow them, for instance, to return raw-level data. This requires a new, decentralized mindset and technologies for permission management, an interesting topic for another time.

    Are models private enough?

    Assuming all those layers of protection are in place, there’s still a concern about the safety of the model weights themselves.

    There’s growing concern in the AI community that machine learning models act as a kind of super-compression of their training data: they are not as black-box as previously considered and reveal more information about the underlying data than previously thought.

    And that means that with enough skill, time, effort and powerful hardware, a motivated adversary can try to reconstruct the original data, or at least establish with high probability that a given patient was in the group used to train the model (a membership inference attack, or MIA). Other types of attacks are possible as well, such as extraction, reconstruction and evasion.
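    To make the membership inference threat concrete, here is a toy sketch of the classic loss-threshold MIA baseline (the loss values are synthetic, not from any real model): because an overfit model assigns lower loss to records it was trained on, an adversary can guess membership simply by thresholding the loss.

```python
import random

def loss_threshold_attack(loss, threshold):
    """Classic MIA baseline: guess 'member' whenever the model's loss on a
    record falls below a threshold, exploiting overfitting."""
    return loss < threshold

random.seed(0)
# Synthetic losses: training members cluster low, non-members high.
member_losses = [random.gauss(0.2, 0.05) for _ in range(1000)]
nonmember_losses = [random.gauss(0.8, 0.05) for _ in range(1000)]

threshold = 0.5
tp = sum(loss_threshold_attack(l, threshold) for l in member_losses)
fp = sum(loss_threshold_attack(l, threshold) for l in nonmember_losses)
attack_accuracy = (tp + (1000 - fp)) / 2000
```

    On these exaggerated toy numbers the attack is nearly perfect; on a well-regularized (or DP-trained) model the two loss distributions overlap and the attack degrades toward random guessing, which is exactly the resilience DP buys.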

    To make things even worse, the progress in generative AI that we all admire and benefit from delivers new, more effective techniques for image reconstruction (for example, of patients’ lung scans). The same ideas that we all use to generate images on demand can be used by adversaries to reconstruct original images from MRI/CT scanners. Other data modalities, such as tabular data, text, sound and video, can now be reconstructed using generative AI as well.

    Differential Privacy to the rescue

    Differential privacy (DP) algorithms promise that we can exchange some of the model’s accuracy for much-improved resilience against inference attacks. This is another privacy-utility trade-off that is worth considering.

    In practice, differential privacy means adding a very specific type of noise, together with clipping, chosen to give a good ratio of privacy gained to accuracy lost.
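    Here is a minimal sketch of that clip-then-noise recipe, in the spirit of DP-SGD but stripped down to plain Python (the clip norm and noise multiplier values are illustrative, not recommendations):

```python
import math
import random

def clip_and_noise(update, clip_norm, noise_multiplier, rng):
    """DP-style treatment of one model update: bound its L2 norm
    (clipping), then add Gaussian noise scaled to that bound."""
    norm = math.sqrt(sum(w * w for w in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [w * scale for w in update]
    sigma = noise_multiplier * clip_norm
    return [w + rng.gauss(0.0, sigma) for w in clipped]

rng = random.Random(42)
# A hypothetical two-parameter update with L2 norm 5.0, clipped to 1.0
noisy = clip_and_noise([3.0, 4.0], clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

    Clipping bounds any single patient’s influence on the update; the noise then hides whether that patient was present at all. Accounting for the cumulative privacy budget (ε, δ) across training rounds is the hard part, and is what libraries like Opacus handle.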

    It can be as simple as (least effective) Gaussian noise, but nowadays much more sophisticated algorithms are available: the Sparse Vector Technique (SVT), the Opacus library as a practical implementation of differentially private stochastic gradient descent (DP-SGD), plus venerable Laplace-noise-based libraries (e.g., PyDP).

    Fig. 2. On-device differential privacy that we all use all the time.

    And, by the way, we all benefit from this technique without even realizing it exists, and it is happening right now. Telemetry data from mobile devices (Apple iOS, Google Android) and desktop OSes (Microsoft Windows) is processed using differential privacy and federated learning algorithms to train models without sending raw data from our devices. And it’s been that way for years.

    Now there’s growing adoption for other use cases, including our favorite: siloed federated learning, with relatively few participants holding large amounts of data in purpose-built consortia of different organizations and companies.

    Differential privacy is not specific to federated learning. However, there are different strategies for applying DP in federated learning scenarios, as well as different algorithm choices: some algorithms work better for federated setups, others for local differential privacy (LDP) or centralized data processing.

    In the context of federated learning, we anticipate a drop in model accuracy after applying differential privacy, but still (and to some extent hopefully) expect the model to perform better than local models trained without federated aggregation. The federated model should retain its advantage despite the added noise and clipping (DP).

    Fig. 3. What we can expect based on known papers and our experiences.

    Differential privacy can be applied as early as at the source data (Local Differential Privacy (LDP)).
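    The textbook example of LDP is randomized response: each participant perturbs their own value before it ever leaves the device, and the aggregator debiases the noisy reports. A minimal sketch for a single private bit (the ε value and counts are illustrative):

```python
import math
import random

def randomized_response(value, epsilon, rng):
    """epsilon-LDP randomized response for one private bit: report the true
    value with probability e^eps / (e^eps + 1), otherwise flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return value if rng.random() < p_truth else 1 - value

def debiased_mean(reports, epsilon):
    """Correct the noisy reports into an unbiased estimate of the
    true proportion of 1s: E[observed] = (2p - 1) * pi + (1 - p)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

rng = random.Random(7)
true_bits = [1] * 300 + [0] * 700          # true rate = 0.30
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
estimate = debiased_mean(reports, epsilon=1.0)
```

    Each individual report is plausibly deniable, yet the population-level proportion is still recoverable, which is exactly the LDP promise.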

    Fig. 4. Different places where DP can be applied to improve data protection.

    There are also cases of federated learning within a network of partners who all have data access rights and are less concerned about data protection levels, so there might be no DP at all.

    On the other hand, when the model is going to be shared with the outside world or sold commercially, it might be a good idea to apply DP to the global model as well.

    Practical experimentation results

    At Roche’s Federated Open Science team, NVIDIA Flare is our tool of choice for federated learning, as the most mature open-source federated framework on the market. We also collaborate with the NVIDIA team on the future development of NVIDIA Flare and are glad to help improve an already great solution for federated learning.

    We tested three different DP algorithms (the SVT filter, a Gaussian noise filter and a percentile-based filter), applying differential privacy to the models using different strategies:

    • Every federated learning round
    • Only the first round (of federated training)
    • Each Nth round (of federated training)

    for three different cases (datasets and models):

    • FLamby Tiny IXI dataset
    • Breast density classification
    • Higgs classification

    So, we explored three dimensions: algorithm, strategy and dataset (case).

    The results conform to our expectations: model accuracy degradation was larger with lower privacy budgets.

    FLamby Tiny IXI dataset

    (Dataset source: https://owkin.github.io/FLamby/fed_ixi.html)

    Fig. 5. Model performance without DP

    Fig. 6. Model performance with DP on the first round

    Fig. 7. SVT applied every second round (with decreasing threshold)

    We observe a significant improvement in accuracy when SVT is applied only on the first round, compared to applying the SVT filter on every round.

    Breast density case

    (Dataset source: Breast Density Classification using MONAI | Kaggle)

    Fig. 8. Model performance without DP

    Fig. 9. DP applied to the first round

    We observe a moderate accuracy loss after applying a Gaussian noise filter.

    This dataset was the most troublesome and sensitive to DP (major accuracy loss, unpredictability).

    Higgs classification

    (Dataset source: HIGGS — UCI Machine Learning Repository)

    Fig. 10. Model performance with percentile value 95.

    Fig. 11. Percentile value 50.

    We observe minor, acceptable accuracy loss related to DP.

    Lessons learned

    An important lesson learned is that differential privacy outcomes are very sensitive to the parameters of a given DP algorithm, and it’s hard to tune them to avoid a total collapse of model accuracy.

    Also, we experienced a certain anxiety from not really knowing how much privacy protection we had gained, and at what price. We only saw the “cost” side (accuracy degradation).

    We had to rely to a major extent on the known literature, which demonstrates that even small amounts of DP noise help secure data.

    As engineers, we’d like to see some type of automatic measure showing how much privacy we gained for how much accuracy lost, and maybe even some kind of AutoDP tuning. That seems far, far away from the current state of technology and knowledge.

    Then we applied privacy meters to see if there’s a visible difference between models without DP and models with DP. We observed changes in the curve, but it’s really hard to quantify how much we gained.

    Some algorithms didn’t work at all; some required many attempts to tune properly before delivering viable results. There was no clear guidance on how to tune the parameters for a particular dataset and model.

    So our current opinion is that DP for FL is hard, but entirely feasible. It requires many iterations and trial-and-error loops to achieve acceptable results, while trusting the algorithms’ promise of privacy improvements of orders of magnitude.

    The future

    Federated learning is a great option for improving patients’ outcomes and treatment efficacy through better ML models while preserving patients’ data privacy.

    But data protection never comes free, and differential privacy for federated learning is a perfect example of that trade-off.

    It’s great to see improvements in algorithms of differential privacy for federated learning scenarios to minimize the impact on accuracy while maximizing resilience of models against inference attacks.

    As with all trade-offs the decisions have to be made balancing usefulness of models for practical applications against the risks of data leakage and reconstruction.

    And that’s where our expectations for privacy meters grow: to know more precisely what we are “selling” and what we are “buying”, and what the exchange rate is.

    The landscape is dynamic, with better tools available for both those who want to better protect their data and those who are motivated to violate those rules and expose sensitive data.

    We also invite other federated minds to build upon and contribute to the collective effort of advancing patient data privacy for Federated Learning.

    The author would like to thank Jacek Chmiel for his significant contribution to the blog post itself, as well as the people who helped develop these ideas and bring them into practice: Jacek Chmiel, Lukasz Antczak and Grzegory Gajda.

    All images in this article were created by the authors.


    Differential Privacy and Federated Learning for Medical Data was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.


  • Apple teases new iPad Pro & Air event with multiple animated logos


    Alongside its main logo promoting the forthcoming iPad event, Apple’s homepage is rotating through half a dozen variants.

    Three of the Apple logos being used to promote the May iPad event

    When Apple announced its May 7, 2024, event where it’s expected to launch new editions of the iPad Air and iPad Pro, it did so with an image of an Apple logo featuring an Apple Pencil. That remains the event image, and it’s the only one shown on Apple’s event page, but the company’s homepage is showing more.

    CEO Tim Cook has also shown off an animated video of the logos and transitions between them on social media.

    Continue Reading on AppleInsider | Discuss on our Forums
