Tag: AI

  • Improving Code Quality During Data Transformation with Polars

    Nikolai Potapov

Image created with AI by DALL-E

In our daily lives as Data/Analytics Engineers, writing ETL/ELT workflows and pipelines (or whatever term your company uses) is a routine and integral part of our work. In this article, however, I will focus only on the Transformation stage. Why? Because at this stage, data from various sources and of different types acquires business significance for the company. This stage is as important as it is delicate: an error here can instantly mislead users, causing them to lose trust in your data.

    To illustrate the process of improving code quality, let’s consider a hypothetical example. Imagine a website where we log user actions, such as what they viewed and purchased. We’ll have user_id for the user ID, product_id for the product, action_type for the type of action (either a view or purchase), and action_dt for the action timestamp.

from dataclasses import dataclass
from datetime import datetime, timedelta
from random import choice, gauss, randrange, seed
from typing import Any, Dict

import polars as pl

seed(42)
base_time = datetime(2024, 8, 9, 0, 0, 0, 0)

user_actions_data = [
    {
        "user_id": randrange(10),
        "product_id": choice(["0001", "0002", "0003"]),
        "action_type": ("purchase" if gauss() > 0.6 else "view"),
        "action_dt": base_time - timedelta(minutes=randrange(100_000)),
    }
    for _ in range(100_000)
]

user_actions_df = pl.DataFrame(user_actions_data)

    Additionally, for our task, we’ll need a product catalog, which in our case will include only product_id and its price (price). Our data is now ready for the example.

    product_catalog_data = {"product_id": ["0001", "0002", "0003"], "price": [10, 30, 70]}
    product_catalog_df = pl.DataFrame(product_catalog_data)

    Now, let’s tackle our first task: creating a report that will contain the total purchase amount and the ratio of the number of purchased items to viewed items from the previous day for each user. This task isn’t particularly complex and can be quickly implemented. Here’s how it might look using Polars:

yesterday = base_time - timedelta(days=1)
result = (
    user_actions_df.filter(pl.col("action_dt").dt.date() == yesterday.date())
    .join(product_catalog_df, on="product_id")
    .group_by(pl.col("user_id"))
    .agg(
        [
            (
                pl.col("price")
                .filter(pl.col("action_type") == "purchase")
                .sum()
            ).alias("total_purchase_amount"),
            (
                pl.col("product_id").filter(pl.col("action_type") == "purchase").len()
                / pl.col("product_id").filter(pl.col("action_type") == "view").len()
            ).alias("purchase_to_view_ratio"),
        ]
    )
    .sort("user_id")
)

Some might say this is a working solution that could be deployed to production, but not us, since you have opened this article for a reason. At the beginning, I emphasized that I would focus specifically on the transformation step.

If we think about the long-term maintenance and testing of this code, and remember that there will be hundreds of such reports, we must recognize that each subsequent developer will understand this code less than the previous one, increasing the chance of errors with every change.

I would like to reduce this risk, which is why I arrived at the following approach:

    Step 1: Let’s separate all the business logic into a distinct class, such as DailyUserPurchaseReport.

@dataclass
class DailyUserPurchaseReport:
    ...

    Step 2: Let’s define the arguments this class should accept: sources – various sources we need for our work, and params – variable parameters that may change, in our case, this could be the report date.

@dataclass
class DailyUserPurchaseReport:

    sources: Dict[str, pl.LazyFrame]
    params: Dict[str, Any]

    Step 3: Define a method that will perform the transformation, for example, execute.

@dataclass
class DailyUserPurchaseReport:

    sources: Dict[str, pl.LazyFrame]
    params: Dict[str, Any]

    def execute(self) -> pl.DataFrame:
        pass

Step 4: Break the entire process down into separate functions that accept a pl.LazyFrame and return a pl.LazyFrame.

@dataclass
class DailyUserPurchaseReport:

    sources: Dict[str, pl.LazyFrame]
    params: Dict[str, Any]

    def _filter_actions_by_date(self, frame: pl.LazyFrame) -> pl.LazyFrame:
        pass

    def _enrich_user_actions_from_product_catalog(self, frame: pl.LazyFrame) -> pl.LazyFrame:
        pass

    def _calculate_key_metrics(self, frame: pl.LazyFrame) -> pl.LazyFrame:
        pass

    def execute(self) -> pl.DataFrame:
        pass

Step 5: Now, use the magic pipe method to connect the entire pipeline. This is precisely why we use pl.LazyFrame everywhere:

    def execute(self) -> pl.DataFrame:
        result: pl.DataFrame = (
            self.sources["user_actions"]
            .pipe(self._filter_actions_by_date)
            .pipe(self._enrich_user_actions_from_product_catalog)
            .pipe(self._calculate_key_metrics)
            .collect()
        )
        return result

It is recommended to use LazyFrame when piping operations in order to take full advantage of query optimization and parallelization.
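Because every step returns a pl.LazyFrame, Polars sees the whole query before running it and can apply optimizations such as predicate pushdown. A quick way to check what will actually be executed is LazyFrame.explain(); here is a small sketch reusing the DataFrames defined earlier:

# Print the optimized query plan without executing it.
lazy_query = (
    user_actions_df.lazy()
    .filter(pl.col("action_dt").dt.date() == yesterday.date())
    .join(product_catalog_df.lazy(), on="product_id")
)
print(lazy_query.explain())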

    Final code:

@dataclass
class DailyUserPurchaseReport:
    """
    Generates a report containing the total purchase amount and the ratio of purchased items
    to viewed items from the previous day for each user.

    Attributes:
        sources (Dict[str, pl.LazyFrame]): A dictionary containing the data sources, including:
            - 'user_actions': A LazyFrame containing user actions data.
            - 'product_catalog': A LazyFrame containing product catalog data.
        params (Dict[str, Any]): A dictionary containing parameters, including:
            - 'report_date': The date for which the report should be generated (previous day).
    """

    sources: Dict[str, pl.LazyFrame]
    params: Dict[str, Any]

    def _filter_actions_by_date(self, frame: pl.LazyFrame) -> pl.LazyFrame:
        """
        Filters user actions data to include only records from the specified date.

        Args:
            frame (pl.LazyFrame): A LazyFrame containing user actions data.

        Returns:
            pl.LazyFrame: A LazyFrame containing user actions data filtered by the specified date.
        """
        return frame.filter(pl.col("action_dt").dt.date() == self.params["report_date"])

    def _enrich_user_actions_from_product_catalog(
        self, frame: pl.LazyFrame
    ) -> pl.LazyFrame:
        """
        Joins the user actions data with the product catalog to include product prices.

        Args:
            frame (pl.LazyFrame): A LazyFrame containing user actions data.

        Returns:
            pl.LazyFrame: A LazyFrame containing user actions data enriched with product prices.
        """
        return frame.join(self.sources["product_catalog"], on="product_id")

    def _calculate_key_metrics(self, frame: pl.LazyFrame) -> pl.LazyFrame:
        """
        Calculates the total purchase amount and the ratio of purchased items to viewed items.

        Args:
            frame (pl.LazyFrame): A LazyFrame containing enriched user actions data.

        Returns:
            pl.LazyFrame: A LazyFrame containing the total purchase amount and
            purchase-to-view ratio for each user.
        """
        return (
            frame.group_by(pl.col("user_id"))
            .agg(
                [
                    (
                        pl.col("price")
                        .filter(pl.col("action_type") == "purchase")
                        .sum()
                    ).alias("total_purchase_amount"),
                    (
                        pl.col("product_id")
                        .filter(pl.col("action_type") == "purchase")
                        .len()
                        / pl.col("product_id").filter(pl.col("action_type") == "view").len()
                    ).alias("purchase_to_view_ratio"),
                ]
            )
            .sort("user_id")
        )

    def execute(self) -> pl.DataFrame:
        """
        Executes the report generation process.

        This method performs the following steps:
        1. Filters user actions data to include only records from the previous day.
        2. Joins the filtered user actions data with the product catalog.
        3. Calculates the total purchase amount and purchase-to-view ratio for each user.
        4. Returns the final report as a DataFrame.

        Returns:
            pl.DataFrame: A DataFrame containing the total purchase amount and
            purchase-to-view ratio for each user.
        """
        result: pl.DataFrame = (
            self.sources["user_actions"]
            .pipe(self._filter_actions_by_date)
            .pipe(self._enrich_user_actions_from_product_catalog)
            .pipe(self._calculate_key_metrics)
            .collect()
        )
        return result

    Let’s check the execution:

# prepare sources
user_actions: pl.LazyFrame = user_actions_df.lazy()
product_catalog: pl.LazyFrame = product_catalog_df.lazy()

# get report date
yesterday: datetime = base_time - timedelta(days=1)

# report calculation
df: pl.DataFrame = DailyUserPurchaseReport(
    sources={"user_actions": user_actions, "product_catalog": product_catalog},
    params={"report_date": yesterday},
).execute()

    Result:

┌─────────┬───────────────────────┬────────────────────────┐
│ user_id ┆ total_purchase_amount ┆ purchase_to_view_ratio │
│ ---     ┆ ---                   ┆ ---                    │
│ i64     ┆ i64                   ┆ f64                    │
╞═════════╪═══════════════════════╪════════════════════════╡
│ 0       ┆ 1880                  ┆ 0.422018               │
│ 1       ┆ 1040                  ┆ 0.299065               │
│ 2       ┆ 2220                  ┆ 0.541667               │
│ 3       ┆ 1480                  ┆ 0.436782               │
│ 4       ┆ 1240                  ┆ 0.264463               │
│ 5       ┆ 930                   ┆ 0.254717               │
│ 6       ┆ 1080                  ┆ 0.306122               │
│ 7       ┆ 1510                  ┆ 0.345133               │
│ 8       ┆ 2050                  ┆ 0.536842               │
│ 9       ┆ 1320                  ┆ 0.414414               │
└─────────┴───────────────────────┴────────────────────────┘

    Bonus

    For those using Test-Driven Development (TDD), this approach is especially beneficial. TDD emphasizes writing tests before the actual implementation. By having clearly defined, small functions, you can write precise tests for each part of the transformation process, ensuring that each function behaves as expected. This not only makes the process smoother but also ensures that your transformations are thoroughly validated at each step.
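For instance, each private method can be tested in isolation by feeding it a small hand-crafted LazyFrame. Here is a minimal sketch using pytest; the test data and expected values are illustrative, not drawn from the article's generated dataset:

from datetime import datetime

import polars as pl

def test_filter_actions_by_date():
    # Two rows: one on the report date, one the day before.
    frame = pl.DataFrame(
        {
            "user_id": [1, 2],
            "action_dt": [datetime(2024, 8, 8, 12, 0), datetime(2024, 8, 7, 12, 0)],
        }
    ).lazy()
    report = DailyUserPurchaseReport(
        sources={},  # not used by this method
        params={"report_date": datetime(2024, 8, 8).date()},
    )
    result = report._filter_actions_by_date(frame).collect()
    # Only the row matching the report date should survive.
    assert result["user_id"].to_list() == [1]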

    Conclusion

In this article, I have outlined a structured approach to improving code quality in your data workflows using Polars. By isolating the transformation step and breaking the process down into distinct, manageable parts, we ensure that our code is both robust and maintainable. Through the use of pl.LazyFrame and the pipe method, we take full advantage of Polars' capabilities for query optimization and parallelization. This method not only enhances the efficiency of our data transformations but also ensures the integrity and business relevance of the data we work with. By following these steps, you can create more reliable and scalable data workflows, ultimately leading to better data-driven decision-making.

    Share Your Experience

If you have experience or useful tips, share your opinion in the comments. It's always interesting to learn from the experiences of other developers.



  • Vision Transformers, Contrastive Learning, Causal Inference, and Other Deep Dives You Shouldn’t Miss

    TDS Editors

    Feeling inspired to write your first TDS post? We’re always open to contributions from new authors.

    As many of us are entering the final stretch of summer, why not take advantage of the calmer weeks before a typically hectic September kicks in and explore new topics in data science and machine learning?

    To help all the learners and skill-growers among our readers, this week we’re presenting a special edition of The Variable, dedicated entirely to our best recent deep dives (and other articles that demand a bit more time and focus than usual). Their reading time might be longer, but they do a fantastic job covering their respective topics with nuance, care, and an eye towards practical applications. We hope you enjoy our selection.

    • A Practical Guide to Contrastive Learning
      Useful for learning underlying data representations without any explicit labels, contrastive learning comes with numerous real-world use cases; Mengliu Zhao guides us through the process of building a SimSiam model using the example of the FashionMNIST dataset.
    • Paper Walkthrough: Vision Transformer (ViT)
      We’re always in the mood for a solid, thorough paper analysis—and even more so when it covers a groundbreaking concept like vision transformers. If you’re new to this topic or would like to expand your existing knowledge of ViT, don’t miss Muhammad Ardi’s debut TDS article.
    • Speeding Up the Vision Transformer with BatchNorm
      Let’s stay with the vision transformer for a bit longer: if you’re already familiar with it but could use some help making your workflows more efficient and streamlined, Anindya Dey, PhD provides a comprehensive guide to integrating batch normalization into an encoder-only transformer architecture, leading to reduced training and inference time.
    • Enhancing E-Commerce with Generative AI — Part 1
      Some of the promised benefits of recently released AI tools remain to be seen. Mina Ghashami presents a new series that focuses on use cases where generative-AI applications are already poised to make a real impact, starting with one of the most common (and business-critical) tasks for e-commerce platforms: product recommendations.
    Photo by Nellie Adamyan on Unsplash
    • Causal Inference with Python: A Guide to Propensity Score Matching
      Bringing theory and practice together, Lukasz Szubelak invites us to explore the ins and outs of causal inference in his patient deep dive, which focuses on propensity score matching as a powerful technique for estimating treatment effects in non-randomized settings.
    • ChatGPT vs. Claude vs. Gemini for Data Analysis (Part 1)
      ML practitioners are facing an increasingly difficult choice when deciding which LLM-powered products to choose. Yu Dong’s new series aims to bring clarity to an occasionally chaotic ecosystem by comparing the performance of three major offerings (ChatGPT, Claude, and Gemini) in essential data-analysis tasks—in this case, writing SQL queries.
    • Omitted Variable Bias
      Reading Sachin Date’s math and statistics explainers is always a highlight for us—and his latest, on “one of the most frequently occurring, and easily missed, biases in regression studies” is no exception. We invite you to explore his deep dive on the omitted variable bias, which also outlines several approaches for analyzing and estimating its effects.

    Thank you for supporting the work of our authors! We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, don’t hesitate to share it with us.

    Until the next Variable,

    TDS Team



  • Introducing document-level sync reports: Enhanced data sync visibility in Amazon Q Business

    Aneesh Mohan

    Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise […]


  • Multi-Agent-as-a-Service — A Senior Engineer’s Overview

    Saman (Sam) Rajaei

There has been much discussion about AI Agents: pivotal, self-contained units capable of performing tasks autonomously, driven by specific instructions and contextual understanding. In fact, the topic has become almost as widely discussed as LLMs. In this article, I consider AI Agents and, more specifically, the concept of Multi-Agent-as-a-Service from the perspective of the lead engineers, architects, and site reliability engineers (SREs) who must deal with AI agents in production systems going forward.

    Context: What Problems Can AI Agents Solve?

    AI agents are adept at tasks that benefit from human-friendly interactions:

1. E-Commerce: agents powered by technologies like LLM-based RAG or Text-to-SQL respond to user inquiries with accurate answers based on company policies, allowing for a more tailored shopping experience and customer journey that can revolutionize e-commerce.
2. Customer Service: this is another ideal application. Many of us have experienced long waits to speak with representatives for simple queries like order status updates. Some startups, Decagon for example, are making strides in addressing these inefficiencies through AI agents.
3. Personalized Product and Content Creation: a prime example of this is Wix: for low-code or no-code website building, Wix developed a chatbot that, through interactive Q&A sessions, creates an initial website for customers according to their description and requirements.

“Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.”

Overall, LLM-based agents work well at mimicking natural human dialogue and simple business workflows, often producing results that are both effective and impressively satisfying.

    An Engineer’s View: AI Agents & Enterprise Production Environments

    Considering the benefits mentioned, have you ever wondered how AI agents would function within enterprise production environments? What architecture patterns and infrastructure components best support them? What do we do when things inevitably go wrong and the agents hallucinate, crash or (arguably even worse) carry out incorrect reasoning/planning when performing a critical task?

    As senior engineers, we need to carefully consider the above. Moreover, we must ask an even more important question: how do we define what a successful deployment of a multi-agent platform looks like in the first place?

    To answer this question, let’s borrow a concept from another software engineering field: Service Level Objectives (SLOs) from Reliability Engineering. SLOs are a critical component in measuring the performance and reliability of services. Simply put, SLOs define the acceptable ratio of “successful” measurements to “all” measurements and their impact on the user journeys. These objectives help us determine the required and expected levels of service from our agents and the broader workflows they support.

    So, how are SLOs relevant to our AI Agent discussion?

    Using a simplified view, let’s consider two important objectives — “Availability” and “Accuracy” — for the agents and identify some more granular SLOs that contribute to these:

1. Availability: this refers to the percentage of requests that receive a successful response (think HTTP 200 status code) from the agents or platform. Historically, the uptime and ping success of the underlying servers (i.e. temporal measures) were key correlated indicators of availability, but with the rise of microservices, notional uptime has become less relevant. Modern systems instead focus on the number of successful versus unsuccessful responses to user requests as a more accurate proxy for availability. Related metrics include latency and throughput.
2. Accuracy: this, on the other hand, is less about how quickly and consistently the agents return responses, and more about how correctly, from a business perspective, they perform their tasks and return data without a human in the loop to verify their work. Traditional systems track similar SLOs, such as data correctness and quality.

The two objectives above are normally measured by submitting internal application metrics at runtime, either at set intervals (e.g., every 10 minutes) or in response to events (user requests, upstream calls, etc.). Synthetic probing, for instance, can mimic user requests, trigger relevant events, and monitor the numbers. The key idea to explore here is this: traditional systems are largely deterministic, so they are generally straightforward to instrument, probe, and evaluate. In our beautiful yet non-deterministic world of GenAI agents, this is not necessarily the case.
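To make this concrete, here is a minimal sketch of a synthetic availability probe. The endpoint URL, probe cadence, and the 99.9% target are illustrative assumptions, not part of any specific agent platform:

import time

import requests

AGENT_URL = "http://agents.internal/agent/answer"  # hypothetical endpoint
SLO_TARGET = 0.999  # illustrative availability objective

def probe_once() -> bool:
    """Send one synthetic request and report whether it succeeded."""
    try:
        resp = requests.post(AGENT_URL, json={"query": "ping"}, timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def measure_availability(n_probes: int = 10, interval_s: float = 1.0) -> float:
    # Ratio of successful probes; feed this into your SLO tracking.
    successes = 0
    for _ in range(n_probes):
        if probe_once():
            successes += 1
        time.sleep(interval_s)
    return successes / n_probes

availability = measure_availability()
print(f"availability={availability:.4f}, SLO met: {availability >= SLO_TARGET}")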

Note: the focus of this post is on the former of our two objectives, availability. This includes determining acceptance criteria that establish baseline cloud/environmental stability to help agents respond to user queries. For a deeper dive into accuracy (i.e. defining a sensible task scope for the agents, optimizing the performance of few-shot methods, and evaluation frameworks), this blog post acts as a wonderful primer.

    Planning for Agents

Now, back to the things engineers need to get right to ensure infrastructure readiness when deploying agents. To achieve our target SLOs and provide a reliable and secure platform, senior engineers consistently take the following elements into account:

1. Scalability: when the number of requests increases (sometimes suddenly), can the system handle it efficiently?
    2. Cost-Effectiveness: LLM usage is expensive, so how can we monitor and control the cost?
    3. High Availability: how can we keep the system always-available and responsive to customers? Can agents self-heal and recover from errors/crashes?
    4. Security: How can we ensure data is secure at rest and in transit, perform security audits, vulnerability assessments, etc.?
    5. Compliance & Regulatory: a major topic for AI, what are the relevant data privacy regulations and other industry-specific standards to which we must adhere?
    6. Observability: how can we gain real-time visibility into AI agents’ activities, health, and resource utilization levels in order to identify and resolve problems before they impact the user experience?

Sound familiar? These are similar to the challenges that modern web applications, the microservices pattern, and cloud infrastructure aim to address.

So, now what? We propose an AI agent development and maintenance framework that adheres to best practices developed over the years across a range of engineering and software disciplines.

    Multi-Agent-as-a-Service (MAaaS)

This time, let us borrow some of the best practices for cloud-based applications to redefine how agents are designed in production systems:

    • Clear Bounded Context: Each agent should have a well-defined and small scope of responsibility with clear functionality boundaries. This modular approach ensures that agents are more accurate, easier to manage and scale independently.
    • RESTful and Asynchronous Inter-Service Communication: Usage of RESTful APIs for communication between users and agents, and leveraging message brokers for asynchronous communication. This decouples agents, improving scalability and fault tolerance.
    • Isolated Data Storage per Agent: Each agent should have its own data storage to ensure data encapsulation and reduce dependencies. Utilize distributed data storage solutions where necessary to support scalability.
    • Containerization and Orchestration: Using containers (e.g. Docker) to package and deploy agents consistently across different environments, simplifying deployment and scaling. Employ container orchestration platforms like Kubernetes to manage the deployment, scaling, and operational lifecycle of agent services.
    • Testing and CI/CD: Implementing automated testing (unit, integration, contract, and end-to-end tests) to ensure the reliable change management for agents. Use CI tools to automatically build and test agents whenever code changes are committed. Establish CD pipelines to deploy changes to production seamlessly, reducing downtime and ensuring rapid iteration cycles.
• Observability: Implementing robust observability instrumentation such as metrics, tracing, and logging for the agents and their supporting infrastructure to build a real-time view of the platform’s reliability (tracing is of particular interest when a single user request passes through multiple agents). Calculating and tracking SLOs and error budgets for the agents and the aggregate request flow. Synthetic probing and efficient alerting on warnings and failures ensure agent health issues are detected before they widely impact end users. A minimal instrumentation sketch follows this list.
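As one concrete illustration of the observability point, a request handler can be instrumented with the prometheus_client library. The metric names are illustrative, and run_agent is a placeholder for whatever actually invokes the agent:

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names; adapt them to your own conventions.
REQUESTS = Counter("agent_requests_total", "Agent requests", ["agent", "outcome"])
LATENCY = Histogram("agent_request_seconds", "Agent request latency", ["agent"])

def handle_request(agent_name: str, query: str) -> str:
    # Record latency for every request, and count successes vs. errors
    # so availability SLOs can be computed from these series.
    with LATENCY.labels(agent=agent_name).time():
        try:
            answer = run_agent(agent_name, query)  # placeholder for the real agent call
            REQUESTS.labels(agent=agent_name, outcome="success").inc()
            return answer
        except Exception:
            REQUESTS.labels(agent=agent_name, outcome="error").inc()
            raise

# Expose the metrics on :9100 for Prometheus to scrape.
start_http_server(9100)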

By applying these principles, we can create a robust framework for AI agents, transforming the concept into “Multi-Agent-as-a-Service” (MAaaS). This approach leverages the best practices of cloud-based applications to redefine how agents are designed, deployed, and managed.

    Image by the author

    The agent plays a critical role in business operations; however, it does not function in isolation. A robust infrastructure supports it, ensuring it meets production expectations, with some key components:

    • Service-Oriented Architecture: Design agents as services that can be easily integrated into existing systems.
    • API Gateway: Use an API gateway to manage and secure traffic between clients and agents.
    • Elastic Infrastructure: Utilize cloud infrastructure that can elastically scale resources up or down based on demand.
    • Managed Services: Take advantage of managed services for databases, vector stores, messaging, and machine learning to reduce operational overhead.
    • Centralized Monitoring: Use centralized monitoring solutions (e.g., CloudWatch, Prometheus, Grafana) to track the health and performance of agents.

    To highlight this, we will demo a simple multi-agent system: a debate platform.

    Example: Multi-Agent Debate System

    We’ve crafted a multi-agent debate system to demonstrate MAaaS in action. The debate topic is AI’s impact on the job market. The setup features three agents:

    • Team A, which supports AI’s benefits for jobs
    • Team B, which holds the opposing view
    • The Host, which manages the debate, ending it after eight rounds or when discussions become redundant.
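Before turning to the architecture, the Host’s control flow can be sketched in a framework-agnostic way. Here, ask_agent and the redundancy check are placeholder assumptions; the demo’s real agents are built with PhiData, and history is persisted in a database rather than a Python list:

MAX_ROUNDS = 8  # the debate ends after eight rounds at the latest

def is_redundant(history: list[str]) -> bool:
    # Naive redundancy check (illustrative): stop early if a team
    # repeats its own previous message verbatim.
    return len(history) >= 3 and history[-1] == history[-3]

def run_debate(ask_agent) -> list[str]:
    # ask_agent(team, history) -> str stands in for the real agent call.
    history: list[str] = []
    for _ in range(MAX_ROUNDS):
        for team in ("team_a", "team_b"):
            history.append(ask_agent(team, history))
        if is_redundant(history):
            return history
    return history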

Focusing on system architecture, we use PhiData to create the agents, deploying them via AWS Elastic Kubernetes Service (EKS) for high availability. Agent activities are monitored with AWS CloudWatch, and EKS’s service discovery ensures the agents communicate seamlessly. Crucially, conversation histories are stored in a database, allowing any backup agent to continue a discussion without interruption in case of failures. This resilience is bolstered by a message queue that ensures data integrity by processing messages only when fully consumed. To maintain dialogue flow, each agent is limited to a single replica for now, though Kubernetes ensures the desired state is restored if a pod goes down.

    Image by the author

To enable users to try the system locally, we’ve created a MiniKube deployment YAML. In this simplified version, we’ve eliminated the Postgres database; instead, each agent temporarily stores its conversation history in memory. This adjustment keeps the system lightweight and more accessible for local deployment while still showcasing the essential functionality. You will need to install MiniKube, Ollama, and kubectl on your system first.

Save the deployment YAML in a file called deploy.yml and run:

    $ minikube start
    $ ollama run llama3
    $ kubectl apply -f deploy.yml

To start the debate (minikube behaves slightly differently on Linux-based vs. Windows systems):

    $ kubectl get pods
    $ kubectl exec <host-pod-name> -- curl -X GET 'http://localhost:8080/agent/start_debate'

    To get the debate history:

    $ kubectl exec <host-pod-name> -- curl -X GET 'http://localhost:8080/agent/chat-history'

To tear down the resources:

$ kubectl delete -f deploy.yml

The agents proceed to have an outstanding debate (see the debate output in the appendix below).

    Conclusion

The interest in multi-agent systems opens up numerous possibilities for innovation and efficiency. By leveraging cloud-native principles and best practices, we can create scalable, cost-effective, secure, and highly available multi-agent systems. The MAaaS paradigm not only aligns with modern software engineering principles but also paves the way for more sophisticated and production-ready AI applications. As we continue to explore and refine these concepts, the potential for multi-agent systems to revolutionize various industries becomes increasingly promising.

    Note: this article was written as a collaboration between Sam Rajaei and Guanyi Li.

    Appendix: Debate Output

    Thank you for your attention, till next time!

    Image by the author



  • Derive generative AI-powered insights from ServiceNow with Amazon Q Business

    Prabhakar Chandrasekaran

    This post shows how to configure the Amazon Q ServiceNow connector to index your ServiceNow platform and take advantage of generative AI searches in Amazon Q. We use an example of an illustrative ServiceNow platform to discuss technical topics related to AWS services.
