Tag: tech
-
Two affordable OTC glucose monitors were just unveiled at CES – and you can try them now
These over-the-counter CGMs from health brands Dexcom and Abbott make glucose monitoring affordable and accessible. Here’s how they can help you – no prescription necessary. -
The best laptops of CES 2025
CES 2025 has been a huge year for laptops of all varieties. Here are the seven that really stood out.
-
This CES 2025 power bank doubles as a Wi-Fi hotspot, and I love it
You probably have a power bank for your phone. But what about one that also doubles as a Wi-Fi hotspot? Enter the Baseus EnerGeek.
-
Rictor’s Skyrider X1 is equal parts moped, quadcopter and fantasy
According to Wikipedia, the first instance of the phrase “post-truth” was written by Steve Tesich in 1992 when referencing political scandals post-Watergate. Clearly, ol’ Stevie never visited CES, where the standards for saying things that are provably true are slightly laxer than in the rest of civil discourse. Apropos of nothing, a company called Rictor, which makes and sells one e-bike, the Rictor K1, is advertising the Skyrider X1: a moped-cum-quadcopter that you can use to zoom through the streets one second and through the skies the next. Which, as you all know, is a totally achievable thing for any consumer electronics company to pull off by its promised launch date of 2026.
The Skyrider X1, its theoretical makers claim, is an electric moped with an enclosed cabin that, when things get too congested, will transform into a quadcopter. All you’ll need to do is pop out the four arms, each with two fan blades, and you’ll be able to ascend to a maximum of 200 meters above the ground. Rictor says safety is its top priority, including plenty of redundant systems and, should all else fail, a built-in parachute. Plus, the Skyrider X1 is capable of automatically taking off and landing, and can plan its optimal route when it’s up in the air. The company’s website says the X1 SL, with a 10.5kWh battery, will have a flight time of 25 minutes, while the X1 SX, with its 21kWh battery, will stay in the air for 40 minutes.
That’s pretty exciting, not to mention that the company says it’s aiming to sell the Skyrider X1 for $60,000, far below what you might expect to pay for a moped-copter in this class. You could buy one and use it to speed up your DoorDash deliveries and earn some sweet money in tips. Perhaps, when the pre-order page opens, you can lay down that cash before heading over to my new venture, where I’ll sell you a bridge. Seriously, one of London’s many bridges, which you’ll own, all to yourself, but you will need to arrange delivery and pay for shipping with a third party I haven’t yet invented.
This article originally appeared on Engadget at https://www.engadget.com/transportation/rictors-skyrider-x1-is-equal-parts-moped-quadcopter-and-fantasy-220802108.html?src=rss
-
Spit on this stick to see how burned out you are
Stress can really take a toll on your body and mind, often in ways you may not immediately realize. Swiss startup Nutrix AG is hoping a quick, at-home spit test can help by giving users a better idea of how stressed out they really are — and tools to manage it.
At CES 2025, Nutrix showed off its cortiSense device that’s designed to measure levels of cortisol in saliva and can be used to track changes over time. The startup aims to launch it by the end of this year, and it’ll work with the gSense app and digital platform to offer things like personalized wellness coaching from a medical team.
It’s meant to be an easy and noninvasive way to identify and combat burnout. The part that’s a little sus, though? In a press release, Nutrix CEO Maria Hahn said the company is focusing on “empowering enterprises,” noting that employee burnout can present “a significant challenge with a huge human and financial cost.” So, get your stress under control to better perform labor, I guess.
I wasn’t able to pop one in my mouth and try it out (I did ask), but the Nutrix team says a reading should take about 3-5 minutes to complete. The device, which looks like a vape, uses disposable tabs that have a cortisol measuring sensor. “You get the quantitative information of the cortisol in saliva,” which is then “transmitted over to the digital health platform to combine with other data, like activity monitoring, glucose [and] weight,” said Nutrix co-founder and CTO Dr. Jemish Parmar at CES’s Unveiled event. You’re supposed to take four measurements a day.
Cheyenne MacDonald for Engadget
The company didn’t share pricing information, but the team says it will be offered as part of a subscription program that would include the cortiSense device, the single-use sensors and the digital health platform. The gSense platform so far offers guidance around weight loss, but it will soon offer mental health services too, according to Dr. Dominika Sulot, the Data and Software Lead. “Once you have all the data, you’re scheduling an appointment with [the medical team] and then they’re providing you the personalized plan,” Sulot says.
For personal use, this kind of thing could be great if it works as stated, especially if it would connect users with physical and mental health support. But I’m not loving the emphasis on enterprise applications to, per the press release, “foster a healthier, more productive workforce.” Actually, I might have just vomited in my mouth a little writing that. I wonder what cortiSense would detect in that.
This article originally appeared on Engadget at https://www.engadget.com/home/spit-on-this-stick-to-see-how-burned-out-you-are-024531311.html?src=rss
-
CES 2025: The Lenovo Legion Go S is the first third-party SteamOS handheld
The Lenovo Legion Go is sort of like the SUV of gaming handhelds. It’s big, beefy, comes with a lot of extra equipment like detachable controllers, and it supports vertical mouse functionality that lets it adapt to all sorts of situations. All of that versatility is great, but it makes the device kind of bulky. So for CES 2025, Lenovo is announcing a slightly more portable version called the Legion Go S with support for not one but two different OSes: Windows 11 and SteamOS.
That said, the specs on both variants are nearly identical. They feature either an AMD Ryzen Z2 Go chip or the Z1 Extreme APU Lenovo used on the previous model, with up to 32GB of RAM, a 1TB SSD and a 55.5Wh battery. You also get a microSD card slot for expandable storage, two USB 4 ports and a 3.5mm audio jack. The main difference is their color (and release date, but more on that later), as the Windows 11 Legion Go S comes in white while the SteamOS model will be available in black.
Compared to the original Legion Go, the S features a smaller but still large 8-inch 120Hz OLED display (down from 8.8 inches) with a 1,920 x 1,200 resolution and VRR, instead of the 2,560 x 1,600 144Hz panel on the original. It also doesn’t have detachable controllers or a kickstand. The benefit of this is that the whole system feels much sturdier, which should make you feel better about tossing it in a bag before your next trip. It’s also noticeably lighter at 1.6 pounds versus 1.9 for its older sibling.
Notably, you still get analog sticks with Hall Effect sensors, which you don’t get on rivals like ASUS’ pricey ROG Ally X. Lenovo also moved to a new pivot-style D-pad, though I’m not sure that counts as a true upgrade as I tend to prefer the classic cross-style ones. Another nice bonus for tinkerers is that on the inside, the Go S comes with a shorter 2242 SSD module even though it can accommodate desktop-size 2280 sticks.
Initially, I got a chance to check out the Windows 11 version, whose performance felt quite snappy thanks to the drop in resolution to 1,920 x 1,200, which feels like a more suitable match for its components. Lenovo has also made some improvements to its Legion Space app, so it functions much better as a general game launcher and a place to tweak performance and settings. I also appreciate little touches like how, even though it’s much smaller, the Legion Go S still has a touchpad in front, which is such a huge help when you need to exit Legion Space and navigate around in Windows. I’d even say that despite its size, the pad on the Go S is more responsive, as it feels more like a trackball than a tiny touchpad. And around back, there’s a small toggle for adjusting how far you can pull the shoulder buttons.
As for the Legion Go S powered by SteamOS, I found it remarkable how similar it felt to the Steam Deck despite not being made by Valve. The UI is almost identical; the only differences are some subtle tweaks Lenovo added to support things like the handheld’s RGB lighting and higher 30-watt TDP. In person, the SteamOS model’s casing looks more like a dark purple than pure black, which is a nice subtle touch. However, my biggest takeaway is that Valve’s OS felt slightly more responsive than it does on the Steam Deck, which I’m attributing to the Legion’s newer APU.
The small hiccup is that a higher-end version of the Legion Go S running Windows 11 is expected to go on sale first, sometime later this month, starting at $730 with an AMD Z2 Go processor, 32GB of RAM and 1TB of storage. Unfortunately, that means anyone who wants one of the more affordable models with 16GB of RAM or running SteamOS will have to wait a bit longer, as those variants won’t be available until May. On the bright side, the Legion Go S powered by SteamOS will have a lower starting price of $499, compared to an equivalent Windows model which will start at $599.
This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/ces-2025-the-lenovo-legion-go-s-is-the-first-third-party-steamos-handheld-160001642.html?src=rss
-
Sony Honda Mobility CES 2025 keynote: Learn more about the Afeela 1 EV live here
Afeela is coming back for a curtain call. After dominating the Sony press conference on the opening night of CES 2025, the debut EV from Sony Honda Mobility (a joint venture between the two Japanese concerns) is getting its own breakout event today.
What to expect at Afeela’s CES 2025 press conference
We know a lot more about the Afeela 1 than we did 24 hours ago, thanks to Sony’s earlier presser. The Afeela 1 Origin and Afeela 1 Signature are priced at $89,900 and $109,900, respectively. Customers in California are now able to reserve a Signature trim for a refundable fee of $200 and the first deliveries are planned for mid-2026. The Origin variant is set to arrive the following year. Both variants factor in three years of access to services including Level 2+ driver assistance, the Afeela Personal Agent and a range of entertainment options.
At the Afeela keynote, we should learn much more about the Afeela 1. Expect a closer look at a near-final version of the EV, which is packed with tech.
Watch the Afeela CES 2025 livestream
You can watch the Afeela CES 2025 press conference live right here. The keynote starts Tuesday, January 7 at 7:30PM ET.
This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/sony-honda-mobility-ces-2025-keynote-learn-more-about-the-afeela-1-ev-live-here-212536780.html?src=rss
-
Everything NVIDIA CEO Jensen Huang announced at its CES 2025 keynote
NVIDIA held its CES 2025 keynote last night with CEO Jensen Huang, and it was surprisingly eventful. The company finally unveiled its much-awaited GeForce RTX 5000 GPUs, which promise a considerable performance uplift, to start with. The company didn’t stop there, also announcing Project Digits, a personal AI supercomputer, along with DLSS 4 and more. Here’s a wrap-up of what happened — and you can watch the whole event uncut via the YouTube embed below. (Spoiler alert: It’s more than 90 minutes long.)
NVIDIA RTX 5000-series GPUs
Huang strode out in a new snakeskin-like leather jacket and revealed the much-anticipated RTX 5090 GPU. With 32GB of GDDR7 RAM and an impressive 21,760 CUDA cores, the new flagship can deliver up to twice the relative performance of its predecessor, particularly in ray-tracing (RT) intensive games like Cyberpunk 2077. In fact, that particular title ran at 234 fps with full RT on in a video demo, compared to 109 fps on the RTX 4090. It’s not cheap, though, priced at $1,999.
The company also revealed the $549 RTX 5070 with a far more modest 6,144 CUDA cores and 12GB of GDDR7 RAM, along with the $749 RTX 5070 Ti and $999 RTX 5080.
DLSS 4
A key part of the RTX 5000-series launch was the introduction of DLSS 4, the latest version of the company’s real-time image upscaling technology. It features a new technology called Multi Frame Generation that allows the new GPUs to generate up to three additional frames for every one frame the GPU produces via traditional rendering — helping multiply frame rates by up to eight times. It also represents what NVIDIA calls the “biggest upgrade to its AI models” since DLSS 2, improving things like temporal stability and detail, while reducing artifacts like ghosting.
Project Digits
Finally, NVIDIA launched Project Digits, a “personal AI supercomputer” designed for AI researchers, data scientists and students. It uses NVIDIA’s new GB10 Grace Blackwell superchip, providing up to a petaflop of performance for testing and running AI models. The company says a single Project Digits unit can run models up to 200 billion parameters in size, or multiple machines can be linked together to run models of up to 405 billion parameters. And for its intended audience, Project Digits is relatively cheap at $3,000.
On top of all that, the company introduced NVIDIA Cosmos world foundation models for robot and AV development, the NVIDIA DRIVE Hyperion AV platform for autonomous vehicles and AI Foundation models for RTX PCs “that supercharge digital humans.” It’s all explained in the video above and NVIDIA’s CES 2025 keynote blog.
NVDA stock price seesaw
CES — and Huang’s keynote — are happening against the backdrop of continued volatility in the company’s stock price. NVIDIA shares (ticker NVDA) spiked ahead of Huang’s address, closing on Monday just shy of Apple’s record market cap. But Tuesday saw a reversal, with the stock down more than 6 percent. Still, some are betting it’s a toss-up between the two tech giants as to which will hit a $4 trillion market valuation first.
Update, January 7 2025, 4:18PM ET: This story has been updated with new details on Nvidia’s stock price.
This article originally appeared on Engadget at https://www.engadget.com/ai/everything-nvidia-ceo-jensen-huang-announced-at-its-ces-2025-keynote-174947827.html?src=rss
-
Lenovo makes surprise move by launching first business PC with Snapdragon CPU
The Lenovo ThinkCentre neo 50q QC is small but mighty.
-
How To Learn Math for Machine Learning, Fast
Marina Wyss – Gratitude Driven
Even with zero math background
Photo by Antoine Dautry on Unsplash
Do you want to become a Data Scientist or machine learning engineer, but you feel intimidated by all the math involved? I get it. I’ve been there.
I dropped out of high school after 10th grade, so I never learned any math beyond trigonometry in school. When I started my journey into Machine Learning, I didn’t even know what a derivative was.
Fast forward to today, and I’m an Applied Scientist at Amazon, and I feel pretty confident in my math skills.
I’ve picked up the necessary math along the way using free resources and self-directed learning. Today I’m going to walk you through some of my favorite books, courses, and YouTube channels that helped me get to where I am today, and I’ll also share some tips on how to study effectively and not waste your time struggling and being bored.
Do You Even Need to Know Math for ML?
First, let’s address a common question: Do you even really need to know the math to work in ML?
The short answer is: it depends on what you want to do.
For research-heavy roles where you’re creating new ML algorithms, yes, you obviously need to know the math. But if you’re asking yourself whether you need to learn math, chances are that’s not the kind of job you’re looking for…
But for practitioners — most of us in the industry — you can often be totally competent without knowing all the underlying details, especially as a beginner.
At this point, libraries like NumPy, scikit-learn, and TensorFlow handle most of the heavy lifting for you. You don’t need to know the math behind gradient descent to deploy a model to production.
If you’re a beginner trying to get into ML, in my opinion it is not strategic to spend a bunch of time memorizing formulas or studying linear algebra — you should be spending that time building things. Train a simple model. Explore your data. Build a pipeline that predicts something fun.
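For instance, here’s a minimal sketch of that kind of hands-on start, using scikit-learn’s built-in iris dataset purely as an illustration; no formulas required:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and split it into train/test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple classifier; the library handles all of the math
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```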
That said, there are moments where knowing the math really helps. Here are a few examples:
Imagine you’re training a model and it’s not converging. If you understand concepts like gradients and optimization functions, you’ll know whether to adjust your learning rate, try a different optimizer, or tweak your data preprocessing.
Or, let’s say you’re running a linear regression and interpreting the coefficients. Without math knowledge, you might miss problems like multicollinearity, which makes those coefficients unreliable. Then you draw incorrect conclusions from the data, cost the company millions, and lose your job! Just kidding. Kind of. We do need to be careful when making business decisions from the models we build.
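As a rough illustration (with made-up data), here’s how two nearly identical features can make regression coefficients swing around unreliably, and how a quick correlation check exposes the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# x2 is almost an exact copy of x1, so the two features are collinear
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
y = 3 * x1 + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Coefficients (intercept, x1, x2):", np.round(coef, 2))
# The true effect of 3 gets split unpredictably between x1 and x2

# A correlation near 1 between features is the warning sign
print("corr(x1, x2):", round(float(np.corrcoef(x1, x2)[0, 1]), 4))
```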
So, while you can (and should) get started without deep math knowledge, it’s definitely still reasonable to build your comfort with math over time.
Once you’re hands-on, you’ll start encountering problems that naturally push you to learn more. When you need to debug or explain your results, that’s when the math will start to click, because it’s connected to real problems.
So seriously, don’t let the fear of math stop you from starting. You don’t need to learn it all upfront to make progress. Get your hands dirty with the tools, build your portfolio, and let math grow as a skill alongside your practical knowledge.
What to Learn
Alright, now let’s talk about what to learn when you’re building your math foundation for Machine Learning jobs.
First, linear algebra.
Linear algebra is fundamental for Machine Learning, especially for deep learning. Many models rely on representing data and computations as matrices and vectors. Here’s what to prioritize (a short NumPy sketch follows the list):
- Matrices and Vectors: Think of matrices as grids of numbers and vectors as lists. Data is often stored this way, and operations like addition, multiplication, and dot products are central to how models process that information.
- Determinants and Inverses: Determinants tell you whether a matrix can be inverted, which is used in optimization problems and solving systems of equations.
- Eigenvalues and Eigenvectors: These are key to understanding variance in data and are the foundation of techniques like Principal Component Analysis, which helps reduce dimensionality in datasets.
- Lastly, Matrix Decomposition: Methods like Singular Value Decomposition (SVD) are used in recommendation systems, dimensionality reduction, and data compression.
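To make these concrete, here’s a small NumPy sketch (toy matrices only) that touches each of the operations above:

```python
import numpy as np

# Matrices and vectors: data as a grid of numbers, plus dot products
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 2.0])
print("Matrix-vector product:", A @ v)
print("Dot product:", v @ v)

# Determinant and inverse: a nonzero determinant means A can be inverted
print("Determinant:", np.linalg.det(A))
print("Inverse:\n", np.linalg.inv(A))

# Eigenvalues and eigenvectors: the idea underlying PCA
eigvals, eigvecs = np.linalg.eig(A)
print("Eigenvalues:", eigvals)

# Matrix decomposition: Singular Value Decomposition
U, S, Vt = np.linalg.svd(A)
print("Singular values:", S)
```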
Now we’re on to basic calculus.
Calculus is core to understanding how models learn from data. But we don’t need to worry about solving complex integrals — it’s just about grasping a few key ideas (see the gradient descent sketch after this list):
- First, derivatives and gradients: Derivatives measure how things change, and gradients (which are multidimensional derivatives) are what power optimization algorithms like gradient descent. These help models adjust their parameters to minimize error.
- The Chain Rule is central to neural networks. It’s how backpropagation works — which is the process of figuring out how much each weight in the network contributes to the overall error so the model can learn effectively.
- Lastly, optimization basics: Concepts like local vs. global minima, saddle points, and convexity are important to understand why some models get stuck and others find the best solutions.
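Here’s a tiny, self-contained sketch of gradient descent on a one-variable function, just to show how a derivative drives the parameter updates:

```python
# Minimize f(x) = (x - 3)^2. Its derivative f'(x) = 2 * (x - 3)
# tells us which direction (and how far) to step.
def grad(x):
    return 2 * (x - 3)

x = 0.0              # arbitrary starting point
learning_rate = 0.1

for step in range(50):
    x -= learning_rate * grad(x)

print(f"x after 50 steps: {x:.5f}")  # converges toward the minimum at x = 3
```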
Lastly, statistics and probability.
Statistics and probability are the bread and butter of understanding data. While they’re more associated with data science, there’s definitely a lot of value for ML as well. Here’s what you need to know (a short worked example follows the list):
- Distributions: Get familiar with common ones like normal, binomial, and uniform. The normal distribution, in particular, pops up everywhere in data science and ML.
- Variance and covariance: Variance tells you how spread out your data is, while covariance shows how two variables relate. These concepts are really important for feature selection and understanding your data’s structure.
- Bayes’ Theorem: While it has kind of an intimidating name, Bayes’ theorem is a pretty simple but powerful tool for probabilistic reasoning. It’s foundational for algorithms like Naive Bayes — big surprise — which is used for things like spam detection, as well as for Bayesian optimization for hyperparameter tuning.
- You’ll also want to understand Maximum Likelihood Estimation (MLE), which helps estimate model parameters by finding values that maximize the likelihood of your data. It’s a really fundamental concept in algorithms like logistic regression.
- Finally, sampling and conditional probability: Sampling lets you work with subsets of data efficiently, and conditional probability is essential for understanding relationships between events, especially in Bayesian methods.
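Here’s a short worked example (all numbers invented for illustration) of Bayes’ theorem for a toy spam filter, plus the coin-flip version of Maximum Likelihood Estimation:

```python
import numpy as np

# Bayes' theorem: P(spam | message contains "free")
p_spam = 0.2                 # prior: 20% of all mail is spam
p_free_given_spam = 0.6      # "free" appears in 60% of spam
p_free_given_ham = 0.05      # ...and in 5% of legitimate mail

p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)
p_spam_given_free = p_free_given_spam * p_spam / p_free
print(f"P(spam | 'free') = {p_spam_given_free:.2f}")   # 0.75

# Maximum Likelihood Estimation for a biased coin:
# the MLE of P(heads) is just the observed fraction of heads.
flips = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
print("MLE of P(heads):", flips.mean())
```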
Now, this is definitely not exhaustive, but I think it’s a good overview of the common concepts you’ll need to know to do a good job as a data scientist or MLE.
Next up, I’ll share the best resources to learn these concepts without it being stressful or overwhelming.
Resources
Personally, I would highly recommend starting with a visual and intuitive understanding of the key concepts before you start reading difficult books and trying to solve equations.
For Linear Algebra and Calculus, I cannot speak highly enough about 3blue1brown’s Essence of Linear Algebra and Essence of Calculus series. These videos give a solid introduction to what is actually being measured and manipulated when we use these mathematical approaches. More importantly, they show, let’s say, the beauty in it? It’s strange to say that math videos could be inspirational, but these ones are.
For statistics and probability, I am also a huge fan of StatQuest. His videos are clear, engaging, and just a joy to watch. StatQuest has playlists with overviews on core stats and ML concepts.
So, start there. Once you have a visual intuition, you can start working through more structured books or courses.
There are lots of great options here. Let’s go through a few that I personally used to learn:
I completed the Mathematics for Machine Learning Specialization from Imperial College London on Coursera when I was just starting out. The specialization is divided into three courses: Linear Algebra, Multivariate Calculus, and a last one on Principal Component Analysis. The courses are well-structured and include a mix of video lectures, quizzes, and programming assignments in Python. I found the course to be a bit challenging as a beginner, but it was a really good overview and I passed with a bit of effort.
DeepLearning.AI also recently released a Math for ML Specialization on Coursera. This Specialization also has courses on Linear Algebra and Calculus, but instead of PCA the final course focuses on Stats and Probability. I’m personally working through this Specialization right now, and overall I’m finding it to be another really great option. Each module starts with a nice motivation for how the math connects to an applied ML concept, it has coding exercises in Python, and some neat 3D tools to mess around with to get a good visual understanding of the concepts.
If you prefer learning from books, I have some suggestions there too. First up, if you like anime or nerdy stuff, oh boy do I have a recommendation for you.
Did you know they have manga math books?
The Manga Guide to Linear Algebra
These are super fun. I can’t say that the instructional quality is world-class or anything, but they are cute and engaging, and they made me not dread reading a math book.
The next level up would be “real” math books. These are some of the best:
The Mathematics for Machine Learning ebook by Deisenroth and colleagues is a great comprehensive resource available for free for personal use. It covers key topics we’ve already discussed like Linear Algebra, Calculus, Probability, and Optimization, with a focus on how these concepts apply to machine learning algorithms. It’s relatively beginner-friendly and is generally regarded as one of the best books for learning this material.
Next, Practical Statistics for Data Scientists is another well-loved resource that includes code examples in Python and R.
How to Study
Now, before we actually start studying, I think it’s important to spend a little bit of time thinking really deeply about why you even want to do this. Personally, I find that if I’m studying just because I feel like I “should,” or because it’s some arbitrary assignment, I get distracted easily and don’t actually retain much.
Instead, I try to connect to a deeper motivation. Personally, right now I have a really basic motivation: I want to earn a lot of money so that I can take care of everyone I love. I have this opportunity to push myself and make sure everyone is safe and cared for, now and in the future. This isn’t to put extra pressure on myself, but actually just a way that works for me to get excited that I have this opportunity to learn and grow and hopefully help others along the way. Your motivation might be totally different, but whatever it is, try to tie this work to a larger goal.
In terms of strategies for optimizing your study time, I have found that one of the most effective methods is writing notes in my own words. Don’t just copy definitions or formulas — take time to summarize concepts as if you were explaining them to someone else — or, to future you. For example, if you’re learning about derivatives, you might write, “A derivative measures how a function changes as its input changes.” This forces you to actively process the material.
Relatedly, when it comes to math formulas, don’t just stare at them — translate them into plain English, or whatever spoken language you prefer. For instance, take the equation y = mx + b: you might describe m as “the slope that shows how steep the line is,” and b as “the point where the line crosses the y-axis.” So, the full formula might read, “The value of y (the output) is determined by taking the slope (m), multiplying it by x (the input), and then adding b (the starting point where the line intersects the y-axis).”
You can even use your notes as a personal blog. Writing short posts about what you’ve learned is a really solid way to clarify your understanding, and teaching others (even if no one reads it) solidifies the material in your own mind. Plus, sharing your posts on Medium or LinkedIn not only potentially helps others but also allows you to build a portfolio showcasing your learning journey.
Also trust me, when it’s interview time you’ll be happy you have these notes! I use my own study notes all the time.
This next piece of advice I have might not be super fun, but I also recommend not using just one resource. Personally I’ve had a lot of success from taking many different courses, and kind of throwing all my notes together at first. Then, I’ll write a blog like I was just talking about that summarizes all of my learnings.
There are a couple of advantages to this approach: First, repetition helps you retain things. If I see a concept multiple times, explained from multiple angles, I’m much more likely to actually get what’s going on and remember that for longer than a day. Plus, not only do I see the information presented to me multiple times, I’m writing the concepts out in my own words multiple times, including that final time where I synthesize it all and get it ready to share with others — so I have to be really confident I actually got it by the end.
Finally, once you’ve built that foundation and get to the level of math where you can actually use it for stuff, I really recommend coding concepts from scratch. If you can code gradient descent or logistic regression using just numpy, you’re off to a really strong start.
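For example, here’s a rough sketch of logistic regression trained with gradient descent using only NumPy (synthetic data, one feature, no regularization), the kind of from-scratch exercise I mean:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Synthetic data: one feature, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(float)

# Add a bias column and start with zero weights
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(Xb.shape[1])

# Batch gradient descent on the log loss
learning_rate = 0.1
for _ in range(1000):
    preds = sigmoid(Xb @ w)
    gradient = Xb.T @ (preds - y) / len(y)   # gradient of the average log loss
    w -= learning_rate * gradient

accuracy = ((sigmoid(Xb @ w) > 0.5) == y).mean()
print("Weights:", np.round(w, 2), "| Train accuracy:", accuracy)
```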
Again, Math (Probably) Won’t Get You a Job
While I know at this point you’re super excited to start learning math, I do want to just circle back to the important fact that if you’re a beginner trying to get your first job, in my opinion math should not be the first thing you prioritize.
It is really unlikely that your math skills are what will get you a job as a data scientist or MLE.
Instead, prioritize gaining hands-on experience by working on projects and actually building stuff. Employers are far more interested in seeing what you can do with the tools and knowledge you already have than how many formulas you’ve memorized.
As you encounter challenges in your work, you’ll naturally be motivated to learn the math behind the algorithms. Remember, math is a tool to help you succeed, and shouldn’t be a barrier to getting started.
—
If you want more advice on how to break into data science, you can download a free 80+ page e-book on how to get your first data science job (learning resources, project ideas, LinkedIn checklist, and more): https://gratitudedriven.com/
Or, check out my YouTube channel!
Finally, just a heads up, there are affiliate links in this post. So, if you buy something I’ll earn a small commission, at no additional cost to you. Thank you for your support.
How To Learn Math for Machine Learning, Fast was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.