Tag: tech

  • Google’s Gemini Deep Research tool is here to answer your most complicated questions

    Igor Bonifacic

    When Google debuted Gemini 1.5 Pro in February, the company touted the model’s ability to reason through what it called “long context windows.” It said, for example, the algorithm could provide details about a 402-page Apollo 11 mission transcript. Now, Google is giving people a practical way to take advantage of those capabilities with a tool called Deep Research. Starting today, Gemini Advanced users can use Deep Research to create comprehensive but easy-to-read reports on complex topics.

    Aarush Selvan, a senior product manager on the Gemini team, gave Engadget a preview of the tool. At first glance, it looks to work like any other AI chatbot. All interactions start with a prompt. In the demo I saw, Selvan asked Gemini to help him find scholarship programs for students who want to enter public service after school. But things diverge from there. Before answering a query, Gemini first produces a multi-step research plan for the user to approve.

    For example, say you want Gemini to provide you with a report on heat pumps. In the planning stage, you could tell the AI agent to prioritize information on government rebates and subsidies or omit those details altogether. Once you give Gemini the go-ahead, it will then scour the open web for information related to your query. This process can take a few minutes. In user testing, Selvan said Google found most people were happy to wait for Gemini to do its thing since the reports the agent produces through Deep Research are so detailed.

    In the example of the scholarship question, the tool produced a multi-page report complete with charts. Throughout, there were citations with links to all of the sources Gemini used. I didn’t get a chance to read the reports in detail, but they appeared to be more accurate than some of Google’s less helpful (and less flattering) AI Overviews.

    According to Selvan, Deep Research uses some of the same signals Google Search does to determine authority. That said, sourcing is definitely “a product of the query.” The more complicated a question you ask of the agent, the more likely it is to produce a useful answer since its research is bound to lead it to more authoritative sources. You can export a report to Google Docs once you’re happy with Gemini’s work.

    If you want to try Deep Research for yourself, you’ll need to sign up for Google’s One AI Premium Plan, which includes access to Gemini Advanced. The plan costs $20 per month following a one-month free trial. It’s also only available in English at the moment. 

    This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-here-to-answer-your-most-complicated-questions-154354424.html?src=rss


  • One of our favorite Anker power banks drops to only $20

    Sarah Fielding

    I am a huge fan of Anker, so I typically end up buying something every time the brand’s products go on sale. Well, my wallet is currently grumbling at me because it’s that time again: A slew of Anker products are discounted on Amazon. This sale includes Anker’s 3-in-1 5,000mAh USB-C portable charger in black, down to $20 from $40. The new all-time low price comes courtesy of a 38 percent discount combined with a $5 coupon.

    Anker makes up a good chunk of our best power bank and portable charger list for 2024. This particular portable charger is worth calling out because, among other things, it’s compact and delivers 22.5W of output as a battery or 30W when plugged into the wall. It also has a foldable AC plug, a USB-C port and an integrated USB-C cable.

    If you’re looking for a longer charge, check out Anker’s 10,000mAh version of the 3-in-1 power bank. It’s also down to a record-low price of $30 from $45 in every color, a 33 percent discount. It comes with a USB-C cable and provides 30W of output whether it’s plugged in or used as a battery.

    Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

    This article originally appeared on Engadget at https://www.engadget.com/deals/one-of-our-favorite-anker-power-banks-drops-to-only-20-153018011.html?src=rss


  • The best stocking stuffer gifts under $50

    Valentina Palladino

    It’s easy to assume that the best tech gifts are the most expensive things. But there are plenty of options out there for the techie in your life that don’t require you to empty your wallet. If you’re struggling to come up with a gift for a coworker, family member or friend who’s an early adopter or a tech obsessive, we’ve gathered some of our favorite things that are both small and affordable. The best part: All of these gift ideas come in at $50 or less.

    Check out the rest of our gift ideas here.

    This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/the-best-stocking-stuffer-gifts-under-50-130049792.html?src=rss


  • The best white elephant gifts that everyone will love

    Engadget

    Whether or not you’ve heard of a white elephant gift exchange before, there’s a good chance you have the wrong idea of what it is, how it actually works and where the idea came from. According to legend, the King of Siam would give a white elephant to courtiers who had upset him. It was a far more devious punishment than simply having them executed. The recipient had no choice but to thank the king for such an opulent gift, knowing that they likely could not afford the upkeep for such an animal. It would inevitably lead them to financial ruin.

    This story is almost certainly untrue, but it has led to a modern holiday staple: the white elephant gift exchange. Picking the right white elephant gift means walking a fine line: the goal isn’t to just buy something terrible and force someone to take it home with them. Rather, it should be just useful or amusing enough that it won’t immediately get tossed into the trash. The recipient also shouldn’t be able to just throw it in a junk drawer and forget about it. So here are a few suggestions that will not only get you a few chuckles, but will also make the recipient feel (slightly) burdened.

    A white elephant gift exchange is a party game typically played around the holidays in which people exchange funny, impractical gifts.

    A group of people each bring one wrapped gift to the white elephant gift exchange, and each gift is typically of a similar value. All gifts are then placed together and the group decides the order in which they will each claim a gift. The first person picks a white elephant gift from the pile, unwraps it and their turn ends. The following players can either decide to unwrap another gift and claim it as their own, or steal a gift from someone who has already taken a turn. The rules can vary from there, including the guidelines around how often a single item can be stolen — some say twice, max. The game ends when every person has a white elephant gift.

    The term “white elephant” is said to come from the legend of the King of Siam gifting white elephants to courtiers who upset him. While it seems like a lavish gift on its face, the belief is that the courtiers would be ruined by the animal’s upkeep costs.

    Check out the rest of our gift ideas here.

    This article originally appeared on Engadget at https://www.engadget.com/the-best-white-elephant-gifts-that-everyone-will-love-150516491.html?src=rss


  • Donald Trump names Andrew Ferguson as new FTC chair

    Steve Dent

    Donald Trump has named current commissioner Andrew Ferguson as the new Federal Trade Commission (FTC) chair, the president-elect posted on Truth Social. Ferguson has previously decried what he called censorship by big tech and worked as an antitrust enforcer for the FTC and Department of Justice. 

    “At the FTC, we will end Big Tech’s vendetta against competition and free speech,” Ferguson wrote on X. “We will make sure that America is the world’s technological leader and the best place for innovators to bring new ideas to life.”

    Ferguson will take over from Lina Khan, who drew the ire of big tech by launching antitrust cases against Apple and Google, while also blocking a number of multi-billion-dollar corporate mergers. Under her leadership, the FTC has gone after large companies of all stripes over concerns that big mergers would undermine competition and harm consumers with higher prices.

    Ferguson’s pitch for the job reportedly included plans to “reverse Lina Khan’s anti-business agenda” and “hold big tech accountable and stop censorship,” according to Punchbowl News. However, he stated that the agency should continue its strong scrutiny of the dominance of large tech firms, according to The New York Times’ sources.

    With Khan leaving her post as FTC chair, Trump also appointed Mark Meador as commissioner, which will result in a Republican majority on the five-person commission. Trump previously named FCC commissioner Brendan Carr as that agency’s chairman.

    This article originally appeared on Engadget at https://www.engadget.com/general/donald-trump-names-andrew-ferguson-as-new-ftc-chair-143009879.html?src=rss


  • Firefox will no longer support “do not track” feature

    Jeremy Gan

    Mozilla has removed the “Do Not Track” (DNT) feature that had been present in Firefox since 2009, according to Windows Report. Firefox was the first browser to adopt the feature. The change will arrive for all users on version 135 and beyond, but Nightly users who opt to test experimental builds can already see the option missing from their browser settings.

    Firefox isn’t the first browser to remove the DNT function. In fact, Apple had already done so in 2019 for Safari.

    Before decrying Mozilla’s decision, it’s crucial to understand what DNT is. It’s not an order but merely a suggestion to websites to stop tracking you. However, most websites ignore DNT requests, meaning it’s completely useless in today’s context. Firefox’s help page also now reflects this upcoming change.

    Instead of a DNT request, Mozilla is asking Firefox users to select the “Tell websites not to sell or share my data” feature. This setting leverages Global Privacy Control (GPC), which is respected by more websites and even enforced in certain jurisdictions.
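    For the curious, both signals travel as ordinary HTTP request headers that a site is free to honor or ignore. The sketch below is a minimal, hypothetical server-side example in Python using Flask; the endpoint name and response are purely illustrative and not part of any Firefox or Mozilla API. It simply shows how a site might read the legacy DNT header alongside GPC’s Sec-GPC header.

```python
# Hypothetical Flask endpoint showing how a site could read the legacy "DNT"
# header and the Global Privacy Control "Sec-GPC" header. Whether either signal
# is honored is entirely up to the site (and, for GPC, the law in some places).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ad-preferences")
def ad_preferences():
    dnt_requested = request.headers.get("DNT") == "1"      # legacy Do Not Track signal
    gpc_requested = request.headers.get("Sec-GPC") == "1"  # Global Privacy Control signal

    # A site might treat either signal as an opt-out of data sale or sharing.
    tracking_allowed = not (dnt_requested or gpc_requested)

    return jsonify({
        "dnt": dnt_requested,
        "gpc": gpc_requested,
        "tracking_allowed": tracking_allowed,
    })

if __name__ == "__main__":
    app.run(debug=True)
```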

    If you’re particularly privacy-conscious, GPC alone may not be enough for your needs. In that case, we recommend a VPN.

    This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/firefox-will-no-longer-support-do-not-track-feature-141543907.html?src=rss


  • Measuring the Cost of Production Issues on Development Teams


    David Tran

    Deprioritizing quality sacrifices both software stability and velocity, leading to costly issues. Investing in quality boosts speed and outcomes.


    Investing in software quality is often easier said than done. Although many engineering managers express a commitment to high-quality software, they are often cautious about allocating substantial resources toward quality-focused initiatives. Pressed by tight deadlines and competing priorities, leaders frequently face tough choices in how they allocate their team’s time and effort. As a result, investments in quality are often the first to be cut.

    The tension between investing in quality and prioritizing velocity is pivotal in any engineering organization and especially with more cutting-edge data science and machine learning projects where delivering results is at the forefront. Unlike traditional software development, ML systems often require continuous updates to maintain model performance, adapt to changing data distributions, and integrate new features. Production issues in ML pipelines — such as data quality problems, model drift, or deployment failures — can disrupt these workflows and have cascading effects on business outcomes. Balancing the speed of experimentation and deployment with rigorous quality assurance is crucial for ML teams to deliver reliable, high-performing models. By applying a structured, scientific approach to quantify the cost of production issues, as outlined in this blog post, ML teams can make informed decisions about where to invest in quality improvements and optimize their development velocity.

    Quality often faces a formidable rival: velocity. As pressure to meet business goals and deliver critical features intensifies, it becomes challenging to justify any approach that doesn’t directly drive output. Many teams reduce non-coding activities to the bare minimum, focusing on unit tests while deprioritizing integration tests, delaying technical improvements, and relying on observability tools to catch production issues — hoping to address them only if they arise.

    Balancing velocity and quality is rarely a straightforward choice, and this post doesn’t aim to simplify it. However, what leaders often overlook is that velocity and quality are deeply connected. By deprioritizing initiatives that improve software quality, teams may end up with releases that are both bug-ridden and slow. Any gains from pushing features out faster quickly erode as maintenance problems and a steady influx of issues undermine the team’s velocity.

    Only by understanding the full impact of quality on velocity and the expected ROI of quality initiatives can leaders make informed decisions about balancing their team’s backlog.

    In this post, we will attempt to provide a model to measure the ROI of investment in two aspects of improving release quality: reducing the number of production issues, and reducing the time spent by the teams on these issues when they occur.

    Escape defects: the bugs that make their way to production

    Preventing regressions is probably the most direct, top-of-the-funnel measure to reduce the overhead of production issues on the team. Issues that never occurred will not weigh the team down, cause interruptions, or threaten business continuity.

    As appealing as the benefits might be, there is an inflection point after which defending the code from issues can slow releases to a grinding halt. Theoretically, the team could triple the number of required code reviews, triple its investment in tests, and build a rigorous load-testing apparatus. It would find itself preventing more issues, but it would also be extremely slow to release anything new.

    Therefore, to justify investing in any effort to prevent regressions, we need to understand the ROI better. We can approximate the cost saving of each 1% decrease in regressions on overall team performance, which gives us a framework for balancing quality investments.


    The most direct gain from preventing issues is the time the team no longer spends handling them. Studies show teams currently spend anywhere between 20–40% of their time working on production issues, a substantial drain on productivity.

    What would be the benefit of investing in preventing issues? Using simple math we can start estimating the improvement in productivity for each issue that can be prevented in earlier stages of the development process:

    T_saved = T_issues × P

    Where:

    • T_saved is the time saved through issue prevention.
    • T_issues is the current time spent on production issues.
    • P is the percentage of production issues that could be prevented.

    This framework aids in assessing the cost vs. value of engineering investments. For example, a manager assigns two developers a week to analyze performance issues using observability data. Their efforts reduce production issues by 10%.

    In a 100-developer team where 40% of time is spent on issue resolution, this translates to a 4% capacity gain, plus an additional 1.6% from reduced context switching. With 5.6% capacity reclaimed, the investment in two developers proves worthwhile, showing how this approach can guide practical decision-making.
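    As a sanity check on that arithmetic, here is a minimal Python sketch of the prevention math. The function name is mine, and the 40% context-switching uplift is inferred from the 1.6%-on-top-of-4% figures above rather than stated as a constant in the text.

```python
def prevention_gain(time_on_issues: float, prevented_fraction: float) -> float:
    """T_saved = T_issues * P, expressed as a fraction of total team capacity."""
    return time_on_issues * prevented_fraction

team_size = 100
t_issues = 0.40   # 40% of team time currently goes to production issues
p = 0.10          # 10% of production issues prevented
cs_uplift = 0.40  # assumption: inferred from the 1.6% extra on top of a 4% direct gain

direct = prevention_gain(t_issues, p)   # 0.04 -> 4% of capacity
with_cs = direct * (1 + cs_uplift)      # 0.056 -> 5.6% of capacity

print(f"Direct gain: {direct:.1%} (~{direct * team_size:.1f} developers)")
print(f"With reduced context switching: {with_cs:.1%} (~{with_cs * team_size:.1f} developers)")
```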

    It’s straightforward to see the direct impact on team velocity of preventing each 1% of production regressions: it is work on production issues that the team no longer needs to perform. The table below gives some context by plugging in a few values:

    | Time spent on production issues | Direct gain per 1% of issues prevented |
    |---|---|
    | 20% | 0.20% |
    | 25% | 0.25% |
    | 30% | 0.30% |
    | 40% | 0.40% |

    Given this data, as an example, the direct gain in team resources for each 1% improvement for a team that spends 25% of its time dealing with production issues would be 0.25%. If the team were able to prevent 20% of production issues, it would then mean 5% back to the engineering team. While this might not sound like a sizeable enough chunk, there are other costs related to issues we can try to optimize as well for an even bigger impact.

    Mean Time to Resolution (MTTR): Reducing Time Lost to Issue Resolution

    In the previous example, we looked at the productivity gain achieved by preventing issues. But what about those issues that can’t be avoided? While some bugs are inevitable, we can still minimize their impact on the team’s productivity by reducing the time it takes to resolve them — known as the Mean Time to Resolution (MTTR).

    Typically, resolving a bug involves several stages:

    1. Triage/Assessment: The team gathers relevant subject matter experts to determine the severity and urgency of the issue.
    2. Investigation/Root Cause Analysis (RCA): Developers dig into the problem to identify the underlying cause, often the most time-consuming phase.
    3. Repair/Resolution: The team implements the fix.

    During triage, the team may involve subject matter experts to assess whether an issue belongs in the backlog and how urgent it is. Investigation and root cause analysis (RCA) follow, where developers dig into the problem. Finally, the repair phase involves writing the code that fixes the issue.

    Interestingly, the first two phases, and especially investigation and RCA, often consume 30–50% of the total resolution time. This stage holds the greatest potential for optimization, since the key is improving how existing information is analyzed. By adopting more efficient tools for tracing, debugging, and defect analysis, teams can streamline their RCA efforts, significantly reducing MTTR and, in turn, boosting productivity.

    To measure the effect of faster investigation on team velocity, we can take the percentage of time the team spends on an issue and reduce the proportional cost of the investigation stage. Applying the same logic as in the issue-prevention assessment gives us an idea of how much productivity the team could gain from each percentage point of reduction in investigation time.

    T_saved = T_issues × T_investigation × R

    Where:

    • T_saved is the percentage of team time saved.
    • R is the reduction in investigation time.
    • T_investigation is the share of each issue’s resolution time spent on investigation.
    • T_issues is the percentage of team time spent on production issues.

    We can test what the performance gain would be for different values of the T_investigation and T_issues variables by calculating the marginal gain for each percent of reduction R in investigation time.

    As these numbers add up, the team can achieve a significant gain. If, for example, we improve investigation time by 40% in a team that spends 25% of its time dealing with production issues, we reclaim another 4% of that team’s productivity.
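    In code, the same calculation is a one-liner; the short sketch below (my own naming, following the variable definitions above) reproduces the 25% / 40% / 40% example.

```python
def investigation_gain(t_issues: float, t_investigation: float, r: float) -> float:
    """T_saved = T_issues * T_investigation * R, as a fraction of total team time."""
    return t_issues * t_investigation * r

t_issues = 0.25          # 25% of team time spent on production issues
t_investigation = 0.40   # 40% of each issue's resolution time goes to investigation/RCA
r = 0.40                 # 40% reduction in investigation time

print(f"Capacity reclaimed: {investigation_gain(t_issues, t_investigation, r):.1%}")  # 4.0%
```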

    Combining the two benefits

    With these two areas of optimization under consideration, we can create a unified formula to measure the combined effect of optimizing both issue prevention and the time the team spends on issues it is not able to prevent.

    T_saved = T_issues × P + T_issues × (1 − P) × T_investigation × R

    Going back to our example organization, which spends 25% of its time on production issues and 40% of the resolution time per issue on investigation, a 40% reduction in investigation time combined with prevention of 20% of the issues would result in an 8.1% improvement in team productivity. However, we are far from done.

    Accounting for the hidden cost of context-switching

    Each of the naive calculations above ignores a major penalty incurred when work is interrupted by unplanned production issues: context switching (CS). Numerous studies show that context switching is expensive. How expensive? Estimates put the penalty anywhere between 20% and 70% extra work caused by interruptions and switching between tasks. By reducing interrupted work time, we also reduce the context-switching penalty.

    Our original formula did not account for this important variable. A simple, if naive, way to do so is to assume that any unplanned work on production issues incurs an equivalent context-switching penalty on the backlog items already assigned to the team. If we save roughly 8% of the team’s velocity in unplanned work, we also eliminate the context-switching penalty that work imposed on an equivalent 8% of the planned work the team needs to complete.

    Let’s add that to our equation:

    T_total = T_saved × (1 + CS)

    Continuing our example, our hypothetical organization would find that the actual impact of its improvements is now a little over 11%. For a dev team of 80 engineers, that would be more than eight developers freed up to contribute to the backlog.
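    For completeness, here is a small Python sketch that chains both levers and the context-switching uplift together. The way the two savings are combined (investigation savings applied only to the issues that are not prevented) and the 40% context-switching penalty are my assumptions, not the author’s HTML calculator, but with the example inputs it lands within rounding of the figures above.

```python
def total_gain(t_issues: float, p: float, t_investigation: float,
               r: float, cs_penalty: float) -> float:
    """Combined productivity gain from prevention, faster investigation, and the
    avoided context-switching penalty (all values are fractions)."""
    prevented = t_issues * p                               # issues that never happen
    faster_rca = t_issues * (1 - p) * t_investigation * r  # quicker resolution of the rest
    unplanned_saved = prevented + faster_rca               # ~8.2% with the inputs below
    return unplanned_saved * (1 + cs_penalty)              # add the avoided CS penalty

gain = total_gain(t_issues=0.25, p=0.20, t_investigation=0.40, r=0.40, cs_penalty=0.40)
print(f"Total productivity reclaimed: {gain:.1%}")              # ~11.5% of team capacity
print(f"For an 80-engineer team: ~{gain * 80:.0f} developers")  # ~9 developers
```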

    Use the ROI calculator

    To make things easier, I’ve uploaded all of the above formulas as a simple HTML calculator that you can access here:

    ROI Calculator

    Measuring ROI is key

    Production issues are costly, but a clear ROI framework helps quantify the impact of quality improvements. Reducing Mean Time to Resolution (MTTR) through optimized triage and investigation can boost team productivity. For example, a 40% reduction in investigation time recovers 4% of capacity and lowers the hidden cost of context-switching.

    Use the ROI Calculator to evaluate quality investments and make data-driven decisions. Access it here to see how targeted improvements enhance efficiency.

    References:
    1. How Much Time Do Developers Spend Actually Writing Code?
    2. How to write good software faster (we spend 90% of our time debugging)
    3. Survey: Fixing Bugs Stealing Time from Development
    4. The Real Costs of Context-Switching


    Measuring the Cost of Production Issues on Development Teams was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.
