There are too many streaming services to keep track of today — and with prices steadily rising, you might be asking yourself whether subscribing to these services instead of cable is still worthwhile at all. We share those frustrations but have ultimately decided this oversaturated space offers much more than the basic-cable world that came before it. But now you have to wade through all of those options and figure out which have the content you want to watch, which fit your budget and which offer the most compelling original series, movies, documentaries and live TV.
We at Engadget wanted to make that process easier so we’ve compiled a list of the best streaming services you can subscribe to right now, with our favorite picks spanning across all content types and budgets. Should you go out and subscribe to all of the services listed here? Probably not, unless you’re a true cord cutter aching for content. But these are the services that offer the best bang for your buck, regardless of whether you’re a live sports buff, a classic movie lover or a general streaming enthusiast.
This article originally appeared on Engadget at https://www.engadget.com/best-streaming-services-154527042.html?src=rss
Back in 2019, the family of Apple engineer Wei Lun Huang (aka Walter Huang) sued Tesla a year after he was killed when his Model X crashed into a median in Mountain View while Autopilot was engaged. That case is officially closed, now that the automaker has settled the lawsuit on the very day jury selection was supposed to take place. According to CNBC and The New York Times, Tesla’s lawyers asked the court to seal the settlement agreement so that the exact amount the company paid wouldn’t be made public. The company didn’t want “other potential claimants (or the plaintiffs’ bar) [to] perceive the settlement amount as evidence of Tesla’s potential liability for losses, which may have a chilling effect on settlement opportunity in subsequent cases.”
Tesla confirmed shortly after the accident that Autopilot was switched on at the time of the crash, but it also insisted that Huang had time to react and had an unobstructed view of the divider. In a statement to the press, the company insisted that the driver was at fault and that the only way for the accident to have occurred was if Huang “was not paying attention to the road, despite the car providing multiple warnings to do so.” In the lawsuit, Huang’s lawyers pointed to Autopilot marketing materials from Tesla suggesting that its cars are safe enough to use on the road without drivers having to keep their hands on the wheel at all times. We took the image above from a video on Tesla’s Autopilot page, showing a driver with their hands on their lap.
The incident became big enough to attract the attention of the National Transportation Safety Board (NTSB), which conducted an investigation and found that Huang previously reported that the car steered away from the highway on prior trips. In fact, his family said that he used to complain about his car swerving towards the exact barrier he crashed into and had even reported it to the Tesla dealership, which couldn’t replicate the issue. The agency also concluded that Tesla’s collision warning system didn’t alert the driver and that its emergency braking system didn’t activate as it should have when the car started making its way toward the barrier.
That said, the NTSB discovered, as well, that Huang was running a mobile game on his phone at the time of the accident. It just couldn’t determine whether the phone was in his hands when the crash occurred. The Times said Tesla was preparing to show proof to the court that Huang was playing a game when he crashed, which his lawyers denied. Regardless of who’s truly at fault, a trial would’ve called renewed attention to the safety of Tesla’s driver assistance system. Settling puts an end to the case a few months before the company unveils its own robotaxi on August 8.
This article originally appeared on Engadget at https://www.engadget.com/tesla-settles-lawsuit-over-fatal-model-x-crash-that-killed-an-apple-engineer-054710845.html?src=rss
German investment bank Deutsche Bank has published a new survey showing consumers are becoming slightly less skeptical about crypto. According…
As Solana (SOL) navigates a surprising 6.5% dip, the spotlight shifts to InQubeta’s (QUBE) presale, which has raised $13 million and is challenging Hedera (HBAR) with its AI integration in blockchain.
Leverage multi-agentic workflows for code testing and debugging
“Two AI bots solving an Error” —Source: Adobe Firefly (Image generated by author)
It’s April 2024, and it’s been about 17 months since we started using LLMs like ChatGPT to aid us in code generation and debugging tasks. While they’ve added a great deal of productivity, there are times when the generated code is full of bugs and sends us down the good ole StackOverflow route.
In this article, I’ll give a quick demonstration on how we can address this lack of “verification” using Conversable Agents offered by AutoGen.
What is AutoGen?
“AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks.”
Presenting LeetCode Problem Solver:
Start with quietly installing autogen:
```
!pip install pyautogen -q --progress-bar off
```
I’m using Google Colab, so I entered my OPENAI_API_KEY in the Secrets tab and securely loaded it along with the other modules:
```python
import os
import csv
import autogen
from autogen import Cache
from google.colab import userdata

userdata.get('OPENAI_API_KEY')
```
I’m using gpt-3.5-turbo only because it’s cheaper than gpt-4. If you can afford more expensive experimentation and/or you’re doing things more “seriously”, you should obviously use a stronger model.
```python
llm_config = {
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": userdata.get('OPENAI_API_KEY')}],
    "cache_seed": 0,   # seed for reproducibility
    "temperature": 0,  # temperature to control randomness
}
```
Now, I’ll copy the problem statement from my favourite LeetCode problem Two Sum. It’s one of the most commonly asked questions in leetcode-style interviews and covers basic concepts like caching using hashmaps and basic equation manipulation.
```python
LEETCODE_QUESTION = """
Title: Two Sum

Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input would have exactly one solution, and you may not use the same element twice. You can return the answer in any order.

Example 1:
Input: nums = [2,7,11,15], target = 9
Output: [0,1]
Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].

Constraints:
2 <= nums.length <= 10^4
-10^9 <= nums[i] <= 10^9
-10^9 <= target <= 10^9
Only one valid answer exists.

Follow-up: Can you come up with an algorithm that is less than O(n^2) time complexity?
"""
```
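For contrast with the follow-up’s O(n^2) bound, the naive baseline simply checks every pair of indices. A quick reference sketch (purely illustrative; it is not part of the agents’ transcript below):

```python
def two_sum_brute_force(nums, target):
    """O(n^2) baseline: check every pair of indices."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]

print(two_sum_brute_force([2, 7, 11, 15], 9))  # → [0, 1]
```

The hashmap approach the follow-up is fishing for trades this nested loop for a single pass with O(1) lookups.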
We can now define both of our agents. One agent acts as the “assistant” that suggests the solution, while the other serves as a proxy for us, the user, and is also responsible for executing the suggested Python code.
```python
# create an AssistantAgent named "assistant"
SYSTEM_MESSAGE = """You are a helpful AI assistant. Solve tasks using your coding and language skills.
In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.
1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.
2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.
Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.
When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.
If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.

Additional requirements:
1. Within the code, add functionality to measure the total run-time of the algorithm in python function using "time" library.
2. Only when the user proxy agent confirms that the Python script ran successfully and the total run-time (printed on stdout console) is less than 50 ms, only then return a concluding message with the word "TERMINATE". Otherwise, repeat the above process with a more optimal solution if it exists.
"""
```
I set the human_input_mode to “NEVER” because I’m not planning to give any inputs myself and max_consecutive_auto_reply to 4 to limit the back-and-forth turns in the conversation. The Assistant agent has been instructed to respond with the word “TERMINATE” that tells the UserProxyAgent when to conclude the conversation.
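The agent definitions themselves aren’t shown above; based on AutoGen’s standard API, the setup described would look roughly like this (a sketch, not the author’s exact code — the `work_dir` value and the termination lambda are assumptions):

```python
from autogen import AssistantAgent, UserProxyAgent

# LLM-backed agent that proposes solutions, guided by SYSTEM_MESSAGE
assistant = AssistantAgent(
    name="assistant",
    system_message=SYSTEM_MESSAGE,
    llm_config=llm_config,
)

# Proxy for the human user; executes the suggested code locally
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",          # never prompt for human input
    max_consecutive_auto_reply=4,      # cap the back-and-forth turns
    is_termination_msg=lambda msg: "TERMINATE" in (msg.get("content") or ""),
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
```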
Now, the fun time! We’ll initiate the conversation by sending a message from our UserProxyAgent to our Assistant.
One added benefit of using AutoGen (even for non-agentic workflows) is that it provides explicit caching to help you save API costs during development. Here, I’m caching responses on disk, but you can also integrate Redis for this purpose.
```python
# Use DiskCache as cache
with Cache.disk(cache_seed=7) as cache:
    # the assistant receives a message from the user_proxy, which contains the task description
    chat_res = user_proxy.initiate_chat(
        assistant,
        message="""Solve the following leetcode problem and also comment on its time and space complexity:\n\n""" + LEETCODE_QUESTION
    )
```
Output:
user_proxy (to assistant):
Solve the following leetcode problem and also comment on its time and space complexity:
Title: Two Sum
...
Follow-up: Can you come up with an algorithm that is less than O(n^2) time complexity?
To solve this problem efficiently, we can use a dictionary to store the elements we have seen so far along with their indices. While iterating through the array, we can check if the complement of the current element (target - current element) is already in the dictionary. If it is, we have found the pair that sums up to the target.
Here's the Python code to solve the Two Sum problem with a time complexity less than O(n^2):
```python
# filename: two_sum.py
def two_sum(nums, target):
    num_dict = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in num_dict:
            return [num_dict[complement], i]
        num_dict[num] = i

# Test the function with the given examples
nums1 = [2, 7, 11, 15]
target1 = 9
print(two_sum(nums1, target1))  # Output: [0, 1]
```
The time complexity of this solution is O(n) because we iterate through the list of numbers only once. The space complexity is also O(n) because we store elements in a dictionary.
You can run this code and check if it provides the correct output. Let me know the total run-time, and if it's less than 50 ms, you can reply with "TERMINATE". If it's not, I can suggest a more optimal solution.
The Python script ran successfully and provided the correct output for the given examples. Since the total run-time is not provided, I will suggest measuring the total run-time of the algorithm using the "time" library to determine if it's less than 50 ms.
You can add the functionality to measure the total run-time by importing the "time" library and using it to calculate the elapsed time before and after calling the `two_sum` function. Here's how you can modify the code:
```python
# filename: two_sum.py
import time

def two_sum(nums, target):
    num_dict = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in num_dict:
            return [num_dict[complement], i]
        num_dict[num] = i

# Test the function with the given examples and measure the total run-time
nums1 = [2, 7, 11, 15]
target1 = 9
start_time = time.time()
print(two_sum(nums1, target1))  # Output: [0, 1]
end_time = time.time()
print("Total run-time:", (end_time - start_time) * 1000, "ms")
```
Please run this modified code and provide the total run-time for each example. If the total run-time is less than 50 ms for all examples, you can reply with "TERMINATE".
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...

user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
[0, 1]
Total run-time: 0.01239776611328125 ms
[1, 2]
Total run-time: 0.00286102294921875 ms
[0, 1]
Total run-time: 0.0016689300537109375 ms
The Python script ran successfully, and the total run-time for each example is significantly less than 50 ms. Therefore, you can reply with "TERMINATE". The algorithm provided an efficient solution with a time complexity of O(n) and a space complexity of O(n).
1. The UserProxyAgent asks the Assistant to solve the problem based on the task description.
2. The Assistant suggests a solution in a Python code block.
3. The UserProxyAgent executes the Python code.
4. The Assistant reads the console output and responds with a modified solution (with time-measurement functionality). Honestly, I would’ve expected this modified solution right away, but this behavior can be tuned through prompt engineering or by employing a stronger LLM.
With AutoGen, you can also display the cost of the agentic workflow.
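Continuing the session above, the cost can be read off the returned ChatResult or from each agent’s usage summary (attribute and method names here are from recent pyautogen releases; check your installed version):

```python
# chat_res is the ChatResult returned by user_proxy.initiate_chat(...) above
print(chat_res.cost)  # token counts and estimated dollar cost for the run

# Per-agent usage breakdown (the user proxy makes no LLM calls, so it reports none)
assistant.print_usage_summary()
```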