Blog

  • AWS DeepRacer : A Practical Guide to Reducing The Sim2Real Gap — Part 1

    Shrey Pareek, PhD

    AWS DeepRacer : A Practical Guide to Reducing the Sim2Real Gap — Part 1 || Preparing the Track

    Minimize visual distractions to maximize successful laps

    Ever wondered why your DeepRacer performs perfectly in the sim but can’t even navigate a single turn in the real world? Read on to understand why and how to resolve common issues.

In this guide, I will share practical tips & tricks for autonomously running the AWS DeepRacer around a race track. I will include information on training the reinforcement learning agent in simulation and, more crucially, practical advice on how to successfully run your car on a physical track — the so-called simulation-to-real (sim2real) challenge.

In Part 1, I will describe physical factors to keep in mind when running your car on a real track. I will go over the car’s camera sensor (and its limitations) and how to prepare your physical space and track. In later parts, we will go over the training process and reward function best practices. I decided to focus on physical factors first, rather than training, because in my opinion understanding the physical limitations before training in simulation is more crucial.

    As you will see through this multi-part series, the key goal is to reduce camera distractions arising from lighting changes and background movement.

    The Car and Sensors

    AWS DeepRacer. Image by author.

The car is a 1/18th scale race car with an RGB (Red Green Blue) camera sensor. From AWS:

    The camera has 120-degree wide angle lens and captures RGB images that are then converted to grey-scale images of 160 x 120 pixels at 15 frames per second (fps). These sensor properties are preserved in the simulator to maximize the chance that the trained model transfers well from simulation to the real world.

The key thing to note here is that the camera uses grey-scale images of 160 x 120 pixels. Roughly, this means the camera will be good at separating light or white colored pixels from dark or black colored pixels. Pixels that lie between these (i.e. greys) can be used to represent additional information.
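The conversion above can be sketched in a few lines. This is purely illustrative — the `to_greyscale` helper and the BT.601 luminance weights are my assumptions, not DeepRacer’s documented pipeline — but it shows why near-white and near-black pixels remain easy to separate after the colors are thrown away:

```python
import numpy as np

# Illustrative sketch (not DeepRacer's actual pipeline): collapse an RGB frame
# to the grey-scale image the camera description mentions, using the common
# ITU-R BT.601 luminance weights.
def to_greyscale(rgb_frame: np.ndarray) -> np.ndarray:
    """rgb_frame: (H, W, 3) uint8 array -> (H, W) uint8 grey-scale image."""
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B contributions
    return (rgb_frame @ weights).astype(np.uint8)

# A white boundary pixel stays near 255; black track surface stays near 0.
white_pixel = np.full((1, 1, 3), 255, dtype=np.uint8)
black_pixel = np.zeros((1, 1, 3), dtype=np.uint8)
print(to_greyscale(white_pixel)[0, 0], to_greyscale(black_pixel)[0, 0])  # → 255 0
```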

    The most important thing to remember from this article is the following:

    The car only uses a black and white image for understanding the environment around it. It does not recognize objects — rather it learns to avoid or stick to different grey pixel values (from black to white).

    So all steps that we take, ranging from track preparation to training the model will be executed keeping the above fact in mind.

    In the DeepRacer’s case three color-based basic goals can be identified for the car:

1. Stay Within White Colored Track Boundary: Lighter or higher pixel values close to white (255) will be interpreted as the track boundary, and the car will try to stay within this pixel boundary.
2. Drive On Black Colored Track: Darker or lower pixel values close to black (0) will be interpreted as the driving surface itself, and the car should try to drive on it as much as possible.
3. Green/Yellow: Although green and yellow will be seen as shades of grey by the car, it can still learn to (a) stay close to the dotted yellow center line; and (b) avoid the solid green out-of-bounds area.
Actual camera view (left) and simulation view (right) in RGB space. These images are converted to grey scale before inference. Source².
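The three goals above can be made concrete with a tiny classifier. The grey-level thresholds here are hypothetical values I chose for demonstration; the real car learns such associations from training data rather than from hard-coded cutoffs:

```python
# Purely illustrative: the car's policy never sees color names, only grey
# values, so track features reduce to grey-level bands. These thresholds
# are invented for demonstration -- DeepRacer learns the associations from
# training data instead of explicit rules like these.
def classify_pixel(grey: int) -> str:
    if grey >= 200:   # near white (255): track boundary line
        return "boundary"
    if grey <= 60:    # near black (0): drivable track surface
        return "track"
    return "other"    # mid greys: yellow center line, green out-of-bounds

print([classify_pixel(g) for g in (255, 30, 140)])  # → ['boundary', 'track', 'other']
```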

    DeepRacer’s sim2real Performance Gap

    AWS DeepRacer uses Reinforcement Learning (RL)¹ in a simulated environment to train a scaled racecar to autonomously race around a track. This enables the racer to first learn an optimal and safe policy or behavior in a virtual environment. Then, we can deploy our model on the real car and race it around a real track.

Unfortunately, it is rare to get the same performance in the real world as that observed in the simulator, because the simulation cannot capture all aspects of the real world accurately. To their credit, AWS provides a guide on optimizing training to minimize the sim2real gap. Although the advice there is useful, it did not quite work for me. The car comes with an inbuilt model from AWS that is supposed to suit multiple tracks and work out of the box. Unfortunately, at least in my experiments, that model couldn’t even complete a single lap (despite multiple physical changes). There is missing information in the guides from AWS, which I was eventually able to piece together via online blogs and discussion forums.

Through my experiments, I identified the following key factors that increase the sim2real gap:

    1. Camera Light/Noise Sensitivity: The biggest challenge is the camera’s sensitivity to light and/or background noise. Any light hotspot washes out the camera sensors and the car may exhibit unexpected behavior. Try reducing ambient lighting and any background distractions as much as possible. (More on this later.)
2. Friction: Friction between the car wheels and the track adds challenges in calibrating throttle. We purchased the track recommended by AWS through their storefront (read on for why I wouldn’t recommend it). The track is matte vinyl, and in my setup I placed it on carpet in my office’s lunch area. It appears that vinyl on carpet creates high static friction, which causes the car to repeatedly get stuck, especially around slow turns or when attempting to move from a standing start.
    3. Different Sensing Capability of Virtual v/s Real Car: There is a gap in input parameters/state space available to the real v/s simulation car. AWS provides a list of input parameters, but parameters such as track length, progress, steps etc. are only available in simulation and cannot be used by the real car. To the best of my knowledge and through some internet sleuthing — it appears that the car can only access information from the camera sensor. There is a slim chance that parameters such as x,y location and heading of car are known. My research points to this information being unavailable as the car most likely does not have an IMU, and even if it does — IMU based localization is a very difficult problem to solve. This information is helpful in designing the correct reward function (more on that in future parts).
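Point 3 is easiest to see in a reward function. The sketch below follows AWS’s documented reward-function interface (a Python function receiving a `params` dict), but the weighting is my own toy example, not a recommended design; the article returns to reward functions in later parts. Note that signals such as `progress` and `steps` exist only inside the simulator — the physical car simply runs the trained policy on camera frames:

```python
# Minimal DeepRacer-style reward function (toy sketch). Parameters such as
# "progress" and "steps" below exist only in the simulator's input dict;
# the physical car never computes them -- it only runs the trained policy
# on camera frames.
def reward_function(params: dict) -> float:
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track

    # Reward staying near the center line (sim-only signals).
    half_width = params["track_width"] / 2.0
    centering = 1.0 - params["distance_from_center"] / half_width

    # Encourage steady progress per step (also sim-only).
    pace = params["progress"] / max(params["steps"], 1)

    return float(max(centering, 0.0) + pace)

# Example call with a simulated parameter dict:
print(reward_function({
    "all_wheels_on_track": True,
    "track_width": 0.6,
    "distance_from_center": 0.15,
    "progress": 50.0,
    "steps": 100,
}))  # → 1.0
```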

    Track — Build v/s Buy

As mentioned earlier, I purchased the A To Z Speedway Track recommended by AWS. The track is a simplified version of the Autodromo Nazionale Monza F1 track in Monza, Italy.

    Track Review — Do Not Buy

The track is of terribly low quality and I would not recommend buying it. The surface is very creased, flimsy, and highly reflective. Image by author.

    Personally, I would not recommend buying this track. It costs $760 plus taxes (the car costs almost half that) and is a little underwhelming to say the least.

    1. Reflective Surface: The matte vinyl print is of low quality and highly reflective. Any ambient light washes out the camera and leads to crashes and other unexpected behavior.
2. Creases: The track is very creased, and this causes the car to get stuck. You can fix this to some extent by leaving your track spread out in the sun for a couple of days; I had limited success with this. You can also use a steam iron (see this guide). I did not try this, so please do so at your own risk.
3. Size: Not really the track’s fault, but the track dimensions are 18′ x 27′, which was too large for my house. It couldn’t even fit in my two-car garage. Luckily my office folks were kind enough to let me use the lunch room. The track is also very cumbersome to fold and carry.

    Overall, I was not impressed by the quality and would only recommend buying this track if you are short on time or do not want to go through the hassle of building your own.

    Build Your Own Track (If Possible)

    If you can, try to build one on your own. Here is an official guide from AWS and another one from Medium User @autonomousracecarclub which looks more promising.

Using interlocking foam mats to build the track is perhaps the best approach here. It addresses the reflectiveness and friction problems of vinyl tracks. These mats are also lightweight and stack up easily, so moving and storing them is easier.

    You can also get the track printed at FedEx and stick it on a rubber or concrete surface. Whether you build your own or get it printed, those approaches are better than buying the one recommended by AWS (both financially and performance-wise).

    Preparing Your Space — Lighting and Distractions

Remember that the car only uses a black and white image to understand and navigate the environment around it. It does not recognize objects; rather, it learns to avoid or stick to different shades of grey (from black to white): stay on the black track, avoid the white boundaries and the green out-of-bounds area.

    The following section outlines the physical setup recommended to make your car drive around the track successfully with minimum crashes.

    Track preparation steps – (a) I reduced ambient lighting by pulling down all blinds and switching off ceiling lights. A couple of lights could not be switched off as they were always on for emergencies. (b) Barriers help reduce background distractions and reflections. Colored barriers work better than black ones. Green barriers are the most effective. I did not have enough green ones so I used them around more difficult turns. Image by author.

    Minimize Ambient Lights

    Try to reduce ambient lighting as much as possible. This includes any natural light from windows and ceiling lights. Of course, you need some light for the camera to be able to see, but lower is better.

    If you cannot reduce lighting, try to make it as uniform as possible. Hotspots of light create more problems than the light itself. If your track is creased up like mine was, hotspots are more frequent and will cause more failures.

    Colorful Interlocking Barriers

    Both the color of the barriers and their placement are crucial. Perhaps a lot more crucial than I had initially anticipated. One might think they are used to protect the car if it crashes. Although that is part of it, barriers are more useful for reducing background distractions.

I used these 2×2 ft interlocking mats from Costco. AWS recommends using at least 2.5×2.5 ft mats of any color but white. I found that even black barriers throw off the car, so I would recommend colorful ones.

Green ones work best, since the car learns to avoid green in the simulation. Even though training and inference images are in grey scale, green barriers still work better. I had a mix of different colors, so I placed the green ones around the turns where the car went off track most often.

Remember from the earlier section: the car only uses a black and white image for understanding the environment around it. It does not recognize objects; rather, it learns to avoid or stick to different shades of grey (from black to white).

    What’s Next?

    In future posts, I will focus on model training tips and vehicle calibration.

    Acknowledgements

    Shout out to Wes Strait for sharing his best practices and detailed notes on reducing the Sim2Real gap. Abhishek Roy and Kyle Stahl for helping with the experiments and documenting & debugging different vehicle behaviors. Finally, thanks to the Cargill R&D Team for letting me use their lunch space for multiple days to experiment with the car and track.

    References

    [1] Sutton, Richard S. “Reinforcement learning: an introduction.” A Bradford Book (2018).

[2] Balaji, Bharathan, et al. “DeepRacer: Educational autonomous racing platform for experimentation with sim2real reinforcement learning.” arXiv preprint arXiv:1911.01562 (2019).


    AWS DeepRacer : A Practical Guide to Reducing The Sim2Real Gap — Part 1 was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.


  • For Lego, the future is increasingly digital. Pity your inner child

    Thomas Macaulay


    As one of millions of adults who grew up building Lego, the company’s digital adventures distort my childhood memories. Yet my screaming inner infant can’t derail the transition. Our beloved bricks have now been in video games for nearly three decades. Since debuting on Sega Pico in 1995, the Lego games empire has expanded across over 80 titles and 200 million sales. The biggest hits have come from licensing deals. Collaborations with Star Wars, Marvel, and Harry Potter have shifted copious copies — despite their dubious quality. Buoyed by the results, the company has started splurging on games studios. In 2022,…

    This story continues at The Next Web


  • Opinion: AI’s ability to replace jobs shouldn’t be flaunted

    Ioanna Lykiardopoulou


AI is here to stay, for better or for worse. In the business world, the rapidly expanding use of artificial intelligence has sparked both hopes of unprecedented productivity — and fears of job loss. According to a recent survey by EY, more than two in three employees in Europe are worried that AI will eliminate jobs. Blunt announcements by prominent European tech companies are doing nothing to help alleviate these concerns. One of those companies is Sweden’s Klarna. The buy-now-pay-later unicorn aims to cut almost half of its workforce thanks to AI. In the company’s second quarter results on Tuesday, CEO…

    This story continues at The Next Web


  • New Barbie dumbphone could cleave tweens from their screens

    Siôn Geschwindt


    Exactly 65 years since the first Barbie doll was released, the Barbie Phone has finally arrived.  As you might expect, the flip-phone is pink. Very pink. And it comes with all sorts of glittery extras so you can bedazzle it to your heart’s content — and relive some late-90s Barbie nostalgia.  The phone is also dumb. Very dumb. No social media, no apps — just good ol’ fashioned SMS and calls. But that’s the point.   “It is the perfect tool to live your best life and take a vacation from your smartphone,” said its creators, Finnish company Human Mobile Devices…

    This story continues at The Next Web


  • This app makes it easier for neurodivergent people to navigate daily tasks

    Ioanna Lykiardopoulou


Copenhagen-based Tiimo has raised an additional $1.6mn in funding to expand its app supporting neurodivergent individuals in their daily life. Neurodivergent individuals — those with autism, ADHD, dyslexia, and other cognitive differences — make up about 15% to 20% of the population. However, tailored tools to boost productivity and daily life management for people who think differently are still insufficient. Tiimo is on a mission to change this. Behind the startup are Helene Lassen Nørlem and Melissa Würtz Azari, who herself has ADHD. The female duo founded the company in 2015 with the aim of developing a neurodivergent tool kit. Tiimo…

    This story continues at The Next Web


  • Addigy introduces MDM solutions for controlling Apple Intelligence

Addigy’s new mobile device management controls give IT administrators the ability to test and manage Apple’s upcoming AI features, putting them in control of their systems.

IT admins can test Apple Intelligence features ahead of launch

The company has launched new Mobile Device Management (MDM) controls designed for Apple Intelligence. The release aims to provide Managed Service Providers (MSPs) and IT administrators with the tools necessary to test and manage the upcoming AI features in Apple’s latest operating systems before Apple releases them publicly.

Addigy’s new controls are helpful for organizations preparing for these AI integrations. By offering pre-zero-day testing, the company allows IT admins and MSPs to experiment with enabling and disabling Apple Intelligence.

    Continue Reading on AppleInsider | Discuss on our Forums


  • The Mac mini doesn’t sell in huge volumes, but is a crucial part of Apple’s ecosystem

    The Mac mini, initially an affordable entry into Apple’s ecosystem, has evolved into a versatile machine, though sales data paint a mixed picture of its popularity.

Mac mini computer

    First introduced in 2005, the Mac mini was designed as an entry point into the Apple ecosystem. It’s a compact box meant to be paired with peripherals the user already owns.

Fast-forward nearly two decades, and the Mac mini is still around, still updated, and still selling — albeit to a very specific group of people.

    Continue Reading on AppleInsider | Discuss on our Forums


  • Amazon slashes Apple’s M3 iMac to $1,099, matching record low price

    Labor Day Apple sales are heating up, with Amazon going all out with its latest M3 iMac 24-inch price drop. Pick up the all-in-one-desktop for just $1,099.99.

Amazon has issued steep price cuts on iMacs.

    The month-end deal applies to Apple’s standard M3 24-inch iMac configuration that features an 8-core CPU and GPU, along with 8GB of RAM and 256GB of storage. The model retails for $1,299, but Amazon has dropped it to $1,099.99 in a bid for your business this Labor Day weekend.

    Buy for $1,099.99

    Continue Reading on AppleInsider
