I just couldn’t get myself to write yet another out-of-office message, so I developed an AI-powered app to make my digital presence felt while I was off on a month-long holiday.
TL;DR: This article outlines the development of a web app for crafting non-traditional out-of-office emails with Generative AI. Should you choose to peruse this article, dear reader, you will be rewarded with a large number of words strung together to describe how I designed and built a Python app that uses GPT-4 and DALL-E 3 to generate auto-reply messages with accompanying pictures. If, on the other hand, you just want to try out the app yourself, I've got your back; the source code is up for grabs on GitHub.
Embracing Generative AI for Auto-Replies
Out-of-office emails are bland.
There, I’ve said it.
But do they have to be this way? Why not use Large Language Models to sprinkle some flair into these messages? And for that extra pizzazz, why not use Text-to-Image models to generate snazzy images that accompany these texts?
With generative models just a few API calls away, making auto-reply emails pop couldn't be easier. In this article, I'll show you how. We'll take Python and OpenAI's API for a test drive. We'll create whimsical yet professional out-of-office emails and integrate them into Outlook. And for those who prefer to dive straight into the code, it's all there for you on GitHub.
GitHub – sheikhomar/roll: ROLL: A Gen AI-based app for adding flair to your auto-reply messages
What is the Endgame here?
As any self-respecting software engineer knows, we can’t get into the technical wizardry before we know what we want to achieve.
Goal 1: We want to get an LLM to whip up Danish texts that tickle the funny bone. Not only that, it would be neat to pair these messages with relevant images, because who would not appreciate cartoonish images in their inbox?
Goal 2: Quality is vital, especially since these auto-reply messages are sent from my work inbox. Outputs from current AI models are far from perfect. Even powerful LLMs like GPT-4 occasionally trip over the grammar rules in Danish. We need to make sure that the generated content is both correct and appropriate.
Goal 3: Anyone who has wrestled with Outlook knows the absurd number of clicks it takes to manually change out-of-office settings. Automation is not just a buzzword here; it is as necessary as a cup of espresso on a Monday morning.
Goal 4: Even if the generative models cooked up the perfect auto-reply email, the magic would soon wear off if the same text were served every time. So, we want the emails to change frequently: either every time an auto-reply is sent or on a schedule, say, every 24 hours.
In this article, we’ll focus on goals 1 to 3. We’ll save the scheduling part of the project for a future article because it can get quite complicated and deserves its own write-up.
How to Stack Code Blocks Without Toppling the Tower?
Requirements gathering: check. Time to code, right?
Wrong!
We can’t just start typing away like caffeinated code monkeys. We need to think about how to structure our code. We are engineers, after all.
A quick drawing on a blank endpaper of the book C# 4.0 The Complete Reference — which, by the way, is an excellent book for adjusting the height of computer monitors — yields a structure with three ‘layers’:
At the bottom layer are the components that deal with external systems:
- Generative AI API is a unit of abstraction for the generative models. In this project, we’ll rely on OpenAI’s models. Specifically, GPT-4 for drafting the texts and DALL-E 3 for the visuals in our auto-replies.
- Data Repository serves as our digital library to store our creations. We’ll keep things simple and store everything as files on disk.
- Outlook Client is our interface with Microsoft Outlook. It allows us to set the out-of-office settings programmatically, thereby automating what would otherwise be a click-fest worthy of a League of Legends tournament.
The middle layer is the service layer, which contains the components that do the heavy lifting:
- Content Generator churns out texts, but also generates images that accompany these texts. It relies on the Generative AI component to deliver its output.
- File Downloader is necessary as DALL-E’s creations have a shelf-life of just 24 hours. This component downloads these fleeting masterpieces off the internet before they vanish into thin air.
- Image Optimizer trims the excess bytes off of images generated by DALL-E. This can be done by resizing them and maybe applying a quantization algorithm. The idea is to make emails containing images faster to transmit over the wire.
- HTML Creator is responsible for formatting a given text message and an optimized image as HTML text ready to be sent as an auto-reply email.
Capping it all off at the top layer is the User Interface, our command-and-control center, where we oversee everything. Here, we ensure that when the LLM decides to produce prose containing made-up words that almost seem to be, but not entirely, Danish, we can step in and save the day.
The UI also allows us to generate new images based on the text. And most importantly, this is where we can configure the out-of-office settings in Outlook with a click of a button — freeing up precious seconds for other rather enjoyable cognitive endeavors such as optimizing exact k-nearest-neighbor search algorithms.
Now that we have laid out the software design, it’s time for coding.
Let’s roll!
How to Build the Tower Brick by Brick?
This section details in broad strokes how to implement each component described in the previous section. The aim is not to explain each line of code in detail but to provide enough context to clarify the intention of what is happening in the code. The comments in the code and naming should cover the rest. If that is not the case, you are welcome to direct a comment my way.
Generative AI API
To implement the Generative AI component, we could go down the well-trodden path of using libraries like LangChain or OpenAI’s official Python SDK. Instead, let’s veer off onto another, better-structured trail and use AIConfig.
Interestingly, AIConfig emphasizes managing the Generative AI parts of a system through configuration files. This should strike a chord with senior software engineers. By decoupling the AI’s behavior from the application code, we get a more maintainable codebase, which is a cornerstone of high-quality software engineering. Plus, the config-driven approach structures our experiments and allows us to tweak the prompts faster without changing our code.
If this piqued your curiosity, check out Sarmad’s insightful article on AIConfig.
With AIConfig, the code for interacting with the AI becomes refreshingly simple. We just need to instantiate an AIConfigRuntime from a configuration file and then issue calls to the appropriate model using named prompts:
import anyio
from aiconfig import AIConfigRuntime, InferenceOptions
from pathlib import Path


async def main():
    # Create an AIConfigRuntime object from a config file
    config_path = Path("config/auto-reply-content-gen.aiconfig.json")
    runtime: AIConfigRuntime = AIConfigRuntime.load(config_path)

    # Run inference using the prompt named `generate-text`
    inference_options = InferenceOptions(stream=False)
    msg = await runtime.run("generate-text", options=inference_options)

    # Done!
    print(f"Generated message: {msg}")


if __name__ == "__main__":
    anyio.run(main)
Banking on AIConfig in our project, the Generative AI component boils down to a few lines of code. For this reason, we won’t write custom wrapper code for this component, as we would have if we were implementing it with LangChain. Less headache, and no need to hurl choice words at LangChain for its convoluted design and shaky abstractions. Another delightful upside of using AIConfig is that we don’t have to roll our own configuration logic, e.g., with Hydra.
Data Repository
Data Repository ensures that our content can be stored and retrieved reliably as files on disk. It uses a data class named AutoReplyRecord to organize the information and JSON as the serialization format. Our DataRepository implementation exposes CRU operations, i.e., the standard CRUD operations but without allowing for deletion:
import shutil
import uuid
from datetime import datetime
from pathlib import Path
from typing import List, Optional

import aiofiles
from pydantic import BaseModel, Field

from roll.utils import utcnow


class DataRepository:
    """Represents a file-based data repository."""

    def __init__(self, data_dir: Path) -> None:
        """Initializes a new instance of the DataRepository class.

        Args:
            data_dir (Path): The directory to store data in.
        """
        self._data_dir = data_dir
        if not self._data_dir.exists():
            self._data_dir.mkdir(parents=True, exist_ok=True)

    async def get_keys(self) -> List[str]:
        """Returns the keys of existing records."""
        file_names = [
            file_path.name
            for file_path in self._data_dir.iterdir()
            if file_path.is_dir()
        ]
        return file_names

    async def get_all(self) -> List[AutoReplyRecord]:
        """Returns existing records.

        Returns:
            List[AutoReplyRecord]: A list of all records ordered by creation time.
        """
        keys = await self.get_keys()
        records = [await self.get(key=key) for key in keys]
        sorted_records = sorted(records, key=lambda r: r.created_at, reverse=True)
        return list(sorted_records)

    async def create(
        self, ai_config_path: Path, html_template_path: Path
    ) -> AutoReplyRecord:
        """Create a new record.

        Args:
            ai_config_path (Path): The path to the AI Config file.
            html_template_path (Path): The path to the HTML template file.

        Returns:
            AutoReplyRecord: The newly created record.
        """
        key = uuid.uuid4().hex
        dir_path = self._data_dir / key
        dir_path.mkdir(parents=True, exist_ok=True)

        new_ai_config_path = dir_path / ai_config_path.name
        shutil.copyfile(src=ai_config_path, dst=new_ai_config_path)

        new_html_template_path = dir_path / html_template_path.name
        shutil.copyfile(src=html_template_path, dst=new_html_template_path)

        record = AutoReplyRecord(
            key=key,
            dir=dir_path,
            ai_config_path=new_ai_config_path,
            html_template_path=new_html_template_path,
        )
        await self.save(record=record)
        return record

    async def save(self, record: AutoReplyRecord) -> None:
        """Save the given record to disk.

        Args:
            record (AutoReplyRecord): The record to save.
        """
        file_path = record.dir / RECORD_FILE_NAME
        async with aiofiles.open(file_path, mode="w") as f:
            await f.write(record.to_json(indent=2))

    async def get(self, key: str) -> Optional[AutoReplyRecord]:
        """Finds a record by its key.

        Args:
            key (str): The key to search for.

        Returns:
            Optional[AutoReplyRecord]: The record if found, None otherwise.
        """
        file_path = self._data_dir / key / RECORD_FILE_NAME
        if not file_path.exists():
            return None
        async with aiofiles.open(file_path, mode="r") as f:
            json_data = await f.read()
        return AutoReplyRecord.from_json(json_data=json_data)
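The snippet above references an AutoReplyRecord model and a RECORD_FILE_NAME constant that live elsewhere in the repository. To make the repository code easier to follow, here is a minimal sketch of what they might look like; the file name, the exact field set, and the Pydantic v2 serialization helpers are assumptions based on how DataRepository uses them, not the repo’s actual code:

from datetime import datetime
from pathlib import Path

from pydantic import BaseModel, Field

from roll.utils import utcnow

# Assumed name of the JSON file stored inside each record's directory.
RECORD_FILE_NAME = "record.json"


class AutoReplyRecord(BaseModel):
    """Holds everything belonging to a single generated auto-reply (sketch)."""

    key: str
    dir: Path
    ai_config_path: Path
    html_template_path: Path
    created_at: datetime = Field(default_factory=utcnow)

    def to_json(self, indent: int = 2) -> str:
        # Pydantic v2 serializes Path and datetime fields to JSON-friendly strings.
        return self.model_dump_json(indent=indent)

    @classmethod
    def from_json(cls, json_data: str) -> "AutoReplyRecord":
        return cls.model_validate_json(json_data)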
Outlook Client
Automating Outlook becomes child’s play when you have a tool like exchangelib, a Python library that lets you interact with the Microsoft Exchange API like a champ. An excellent piece of software that we’ll use in this project.
For this particular app, we just want to play with Outlook’s out-of-office settings. Therefore, we’ll put together a wrapper class that provides two pieces of functionality: backing up the current out-of-office settings and applying new settings.
import json
from base64 import b64encode
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import cast

import aiofiles
from exchangelib import Account, Credentials, OofSettings
from exchangelib.ewsdatetime import EWSDateTime


class OutlookAutoReplyClient:
    """Represents a client for interacting with Outlook's out-of-office settings."""

    def __init__(self, login_name: str, password: str, account_name: str) -> None:
        """Initializes a new instance of the OutlookAutoReplyClient class.

        Args:
            login_name (str): The login name of the Outlook account.
            password (str): The password of the Outlook account.
            account_name (str): The name of the Outlook account.
        """
        credentials = Credentials(username=login_name, password=password)
        self._account = Account(
            account_name, credentials=credentials, autodiscover=True
        )

    async def backup_to_json_file(self, output_path: Path) -> None:
        """Backs up Outlook's current out-of-office settings to disk.

        Args:
            output_path (Path): The location where to store the backup.
        """
        oof = cast(OofSettings, self._account.oof_settings)
        start_at = cast(EWSDateTime, oof.start)
        end_at = cast(EWSDateTime, oof.end)
        settings = {
            "state": oof.state,
            "start": start_at.ewsformat(),
            "end": end_at.ewsformat(),
            "external_audience": oof.external_audience,
            "internal_reply": oof.internal_reply,
            "external_reply": oof.external_reply,
        }

        # Save settings to disk as JSON
        async with aiofiles.open(output_path, "w") as file:
            json_content = json.dumps(settings, indent=2)
            await file.write(json_content)

    async def set_internal_reply(self, html_content: str) -> None:
        """Enables the internal auto-reply message from yesterday until five days from now.

        Args:
            html_content (str): The message to set as the internal auto-reply.
        """
        start_at = datetime.now(tz=timezone.utc) - timedelta(days=1)
        end_at = datetime.now(tz=timezone.utc) + timedelta(days=5)
        print(f"Setting internal auto-reply message from {start_at} to {end_at}...")
        self._account.oof_settings = OofSettings(
            state=OofSettings.ENABLED,
            external_audience="None",
            internal_reply=html_content,
            external_reply="-",  # Cannot be empty string or None!
            start=start_at,
            end=end_at,
        )
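For context, here is a rough sketch of how this client could be exercised on its own. The credentials, file paths, and Danish message are placeholders; in the actual app they are supplied by the UI layer:

import anyio
from pathlib import Path


async def update_outlook() -> None:
    client = OutlookAutoReplyClient(
        login_name="someone@example.com",  # placeholder credentials
        password="super-secret",
        account_name="someone@example.com",
    )
    # Keep a copy of the current settings so they can be restored after the holiday.
    backup_dir = Path("data/oof")
    backup_dir.mkdir(parents=True, exist_ok=True)
    await client.backup_to_json_file(output_path=backup_dir / "backup.json")
    # Activate a new internal auto-reply.
    await client.set_internal_reply(html_content="<p>Jeg er på ferie.</p>")


if __name__ == "__main__":
    anyio.run(update_outlook)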
Content Generator
Now, to the heart of our project: the Content Generator. We need to generate two types of content. First, we craft an auto-reply text using GPT-4. Then, we let DALL-E 3 generate an image incorporating elements from the generated text.
Unfortunately, DALL-E 3 imposes a limit on its prompt length. To work around this, we use GPT-4 to craft a concise prompt for DALL-E 3 that incorporates aspects extracted from a given text.
This process requires three distinct calls to the AI models:
- generate-text asks GPT-4 to concoct new text that can be used in an auto-reply email.
- generate-dall-e-prompt calls GPT-4, prompting it to come up with a prompt designed specifically for DALL-E 3 based on our generated text from the first call. This is a bit meta, like writing code to generate code.
- generate-image asks DALL-E 3 to generate an image to accompany the auto-reply message. Here, we use the prompt generated by the generate-dall-e-prompt call.
We’ll let AIConfig orchestrate the entire process. For that, it needs a configuration file that describes how to achieve our desired result. We create a configuration file that contains three named prompts:
{
  "name": "auto-reply-content-generator",
  "description": "Configuration for generating content for auto-reply messages.",
  "schema_version": "latest",
  "metadata": {
    "model_parsers": {
      "gpt-4-1106-preview": "gpt-4"
    }
  },
  "prompts": [
    {
      "name": "generate-text",
      "input": "Write an auto-reply message in Danish following the same structure of earlier messages.",
      "metadata": {
        "model": {
          "name": "gpt-4-1106-preview",
          "settings": {
            "model": "gpt-4-1106-preview",
            "max_tokens": 1000,
            "temperature": 0.1,
            "system_prompt": "You're a renowned expert at crafting witty and engaging out-of-office replies in Danish. [...]"
          }
        }
      }
    },
    {
      "name": "generate-dall-e-prompt",
      "input": "Generate a prompt for DALL-E 3 to create an illustration that complements the following out-of-office message:\n{{auto_reply_message}}",
      "metadata": {
        "model": {
          "name": "gpt-4-1106-preview",
          "settings": {
            "model": "gpt-4-1106-preview",
            "max_tokens": 1000,
            "temperature": 0.1,
            "system_prompt": "You are an expert prompt engineer for the image generation model: DALL-E 3. [...]"
          }
        },
        "parameters": {
          "auto_reply_message": "Parameter for the auto-reply message."
        }
      }
    },
    {
      "name": "generate-image",
      "input": "{{dall_e_prompt}}",
      "metadata": {
        "model": {
          "name": "dall-e-3",
          "settings": {
            "model": "dall-e-3",
            "size": "1792x1024",
            "quality": "standard"
          }
        },
        "parameters": {
          "dall_e_prompt": "Parameter for the DALL-E prompt."
        }
      }
    }
  ]
}
It is relatively straightforward to read the configuration file, once you understand the configuration schema:
AIConfig Specification | AIConfig
Next, we construct a class that exposes two main methods: generate_message and generate_image. Inside these methods, we inject our configuration file into AIConfig and let it perform its magic:
from pathlib import Path

from aiconfig import AIConfigRuntime, InferenceOptions


class AutoReplyContentGenerator:
    """Represents a class that generates content for auto-reply messages."""

    def __init__(self, config_file_path: Path, output_dir: Path, verbose: bool) -> None:
        """Initializes a new instance of the AutoReplyContentGenerator class.

        Args:
            config_file_path (Path): The path to the AI Config file to use.
            output_dir (Path): The directory to save outputs to.
            verbose (bool): Whether to print debug messages to stdout.
        """
        if not config_file_path.exists():
            raise ValueError(f"File {config_file_path} not found")
        self._output_path: Path = output_dir / config_file_path.name
        self._runtime: AIConfigRuntime = AIConfigRuntime.load(config_file_path)
        self._verbose = verbose

    async def generate_message(self) -> str:
        """Generates an auto-reply message.

        Returns:
            str: The generated message.
        """
        inference_options = InferenceOptions(stream=False)
        if self._verbose:
            print("Running inference for prompt 'generate-text'...")
        auto_reply_message = await self._runtime.run_and_get_output_text(
            prompt_name="generate-text",
            options=inference_options,
        )
        self._save_outputs()
        print(f"Generated auto-reply message:\n{auto_reply_message}\n")
        return auto_reply_message

    async def generate_image(self, auto_reply_message: str) -> str:
        """Generates an image to accompany the given auto-reply message.

        Args:
            auto_reply_message (str): The auto-reply message to use as inspiration for the image generation.

        Returns:
            str: The URL of the generated image.
        """
        if self._verbose:
            print("Running inference for prompt 'generate-dall-e-prompt'...")
        inference_options = InferenceOptions(stream=False)
        dall_e_prompt = await self._runtime.run_and_get_output_text(
            prompt_name="generate-dall-e-prompt",
            options=inference_options,
            params={
                "auto_reply_message": auto_reply_message,
            },
        )
        self._save_outputs()
        if self._verbose:
            print(f"Generated prompt for DALL-E:\n{dall_e_prompt}\n")
            print("Running inference for prompt 'generate-image'...")
        image_url = await self._runtime.run_and_get_output_text(
            prompt_name="generate-image",
            options=inference_options,
            params={
                "dall_e_prompt": dall_e_prompt,
            },
        )
        self._save_outputs()
        if self._verbose:
            print(f"Generated image URL:\n{image_url}\n")
        return image_url

    def _save_outputs(self) -> None:
        """Saves the outputs of the models to a JSON file."""
        self._runtime.save(
            json_config_filepath=str(self._output_path),
            include_outputs=True,
        )
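To make the flow concrete, here is a hedged example of how the generator could be driven end to end. The paths are illustrative and mirror the ones used elsewhere in the article, not the app’s exact wiring:

import anyio
from pathlib import Path


async def demo() -> None:
    output_dir = Path("data/outputs")
    output_dir.mkdir(parents=True, exist_ok=True)
    generator = AutoReplyContentGenerator(
        config_file_path=Path("config/auto-reply-content-gen.aiconfig.json"),
        output_dir=output_dir,
        verbose=True,
    )
    # First call: GPT-4 writes the auto-reply text.
    message = await generator.generate_message()
    # Second and third calls: GPT-4 writes a DALL-E prompt, then DALL-E 3 renders the image.
    image_url = await generator.generate_image(auto_reply_message=message)
    print(f"Image URL (download it before it expires): {image_url}")


if __name__ == "__main__":
    anyio.run(demo)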
File Downloader
Just like there are numerous programming languages, there are a plethora of ways to download a file from the internet with Python. But as a wise engineer might say: “Choose your tool as you would choose your programming language: carefully and with regard to your project’s requirements.” We’ll roll with the asynchronous aiohttp library, paired with tqdm for that satisfying visual feedback of download progress, because why the hell not:
from pathlib import Path
from typing import Optional
from urllib.parse import unquote, urlparse

import aiofiles
import aiohttp
from tqdm.asyncio import tqdm_asyncio


class FileDownloader:
    """Represents a class that downloads files."""

    def __init__(
        self,
        output_dir: Path,
        verify_ssl: bool,
        verbose: bool,
        download_chunk_size: int = 1024,
    ) -> None:
        """Initializes a new instance of the FileDownloader class.

        Args:
            output_dir (Path): The directory to save downloaded files to.
            verify_ssl (bool): Whether to verify SSL certificates when downloading files.
            verbose (bool): Whether to print debug messages to stdout.
            download_chunk_size (int, optional): The size of each chunk to download. Defaults to 1024.
        """
        self._http = aiohttp.ClientSession(
            connector=aiohttp.TCPConnector(ssl=verify_ssl)  # type: ignore
        )
        self._chunk_size = download_chunk_size
        self._output_dir = output_dir
        self._output_dir.mkdir(parents=True, exist_ok=True)
        self._verbose = verbose

    async def download_one(
        self, url: str, local_file_name: Optional[str] = None
    ) -> Path:
        """Downloads a file from a URL and stores it on local disk.

        Args:
            url (str): The URL to download the file from.
            local_file_name (Optional[str], optional): The name to save the downloaded file under. Defaults to None.

        Returns:
            Path: The location of the downloaded file on the local disk.
        """
        file_path = self._get_local_file_path(url=url, file_name=local_file_name)
        async with self._http.get(url=url) as response:
            if response.status != 200:
                raise Exception(f"Failed to download file: {response.status}")
            if self._verbose:
                print(f"Downloading file from {url} to {file_path}...")
            total_size = int(response.headers.get("content-length", 0))
            with tqdm_asyncio(
                total=total_size, unit="B", unit_scale=True, desc="Downloading"
            ) as progress_bar:
                async with aiofiles.open(file_path, "wb") as file:
                    async for data in response.content.iter_chunked(self._chunk_size):
                        await file.write(data)
                        progress_bar.update(len(data))
        return file_path

    async def close(self) -> None:
        """Closes the HTTP session."""
        await self._http.close()

    def _get_local_file_path(self, url: str, file_name: Optional[str]) -> Path:
        """Gets the path to save the downloaded file to.

        Args:
            url (str): The URL the file will be downloaded from.
            file_name (Optional[str]): The name to save the downloaded file under. Defaults to None.

        Returns:
            Path: The path to save the downloaded file to.
        """
        if file_name is None:
            file_name = unquote(urlparse(url).path.split("/")[-1])
        file_path = self._output_dir / file_name
        if file_path.exists():
            print(f"WARNING: File {file_path} already exists. Overwriting...")
        return file_path
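A quick sketch of how the downloader might be used to rescue a DALL-E image before its URL expires; the URL, output directory, and file name below are placeholders:

import anyio
from pathlib import Path


async def fetch_image(image_url: str) -> Path:
    downloader = FileDownloader(
        output_dir=Path("data/downloads"),
        verify_ssl=True,
        verbose=True,
    )
    try:
        # Store the image under a predictable name so later steps can find it.
        return await downloader.download_one(url=image_url, local_file_name="dall-e-image.png")
    finally:
        # Always close the underlying aiohttp session.
        await downloader.close()


if __name__ == "__main__":
    path = anyio.run(fetch_image, "https://example.com/generated-image.png")
    print(f"Saved to {path}")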
Image Optimizer
We can reduce the file sizes of DALL-E 3’s creations significantly by resizing and quantizing them. With Pillow, these two operations require only a few lines of code. We wrap them in a class:
from pathlib import Path

from PIL import Image


class ImageOptimizer:
    """Represents a class that shrinks images by resizing and quantizing them."""

    def __init__(self, max_width: int, quantize: bool, image_quality: int) -> None:
        """Initializes a new instance of the ImageOptimizer class.

        Args:
            max_width (int): The maximum width of the image in pixels.
            quantize (bool): Whether to quantize the image to reduce file size.
            image_quality (int): The quality of the image when saving it to disk, from 1 to 100.
        """
        self._max_width = max_width
        self._quantize = quantize
        self._image_quality = image_quality

    def run(self, input_path: Path) -> Path:
        """Optimizes an image and stores the result on disk.

        Args:
            input_path (Path): The path to the image to optimize.

        Returns:
            Path: The location of the optimized image on disk.
        """
        output_path = input_path.parent / f"{input_path.stem}-optimized.jpg"
        img = Image.open(input_path)
        img = img.convert("RGB")
        img.thumbnail(size=(self._max_width, self._max_width), resample=Image.LANCZOS)
        if self._quantize:
            # Quantize the image to reduce file size. Pillow converts the image to a
            # palette image with at most 256 colors. This is done by storing 1 byte
            # per pixel (an index into the palette) instead of 3 bytes for R, G and B.
            img = img.quantize()
        if output_path.suffix.lower() in [".jpg", ".jpeg"]:
            # Convert back to RGB before saving to JPEG to avoid errors.
            img = img.convert("RGB")
        img.save(output_path, optimize=True, quality=self._image_quality)
        return output_path
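Usage is little more than a one-liner; the parameter values below are reasonable guesses, not necessarily the ones the app ships with:

from pathlib import Path

# Shrink the image to at most 800 pixels wide, quantize it, and save it as JPEG at quality 80.
optimizer = ImageOptimizer(max_width=800, quantize=True, image_quality=80)
optimized_path = optimizer.run(input_path=Path("data/downloads/dall-e-image.png"))
print(f"Optimized image written to {optimized_path}")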
HTML Creator
Once we generate some text and pair it with an image, we transform the content into a single email artifact. The alternative, using linked images, entails figuring out how to host images for our out-of-office emails. Is it worth it? Not for a project of this scale.
We can easily sidestep the hassle of image hosting: HTML emails let us combine text and images in a single message by embedding the image directly in the markup as Base64-encoded data.
For the email layout, we create an external file to use as an HTML template. No point in hard-coding it, as nobody wants that code smell lingering around. This template contains three customizable fields: one for the text and two for the image (its content type and its Base64-encoded data).
We give the text a touch of HTML styling. Nothing too gaudy, just enough to give it proper spacing. To embed the image binary in the markup, we Base64-encode it into text.
Here is the code:
from base64 import b64encode
from pathlib import Path

import aiofiles


class AutoReplyHtmlCreator:
    """Represents a class that creates the HTML for an auto-reply message."""

    def __init__(self, template_file_path: Path) -> None:
        """Initializes a new instance of the AutoReplyHtmlCreator class.

        Args:
            template_file_path (Path): The path to the HTML template to use.
        """
        self._template_file_path = template_file_path
        if not template_file_path.exists():
            raise ValueError(f"File {template_file_path} not found")

    async def run(self, message: str, image_file_path: Path, output_path: Path) -> str:
        """Creates the HTML for an auto-reply message.

        Args:
            message (str): The message to include in the auto-reply.
            image_file_path (Path): The path to the image to include in the auto-reply.
            output_path (Path): The path to save the HTML to.

        Returns:
            str: The HTML for the auto-reply message.
        """
        async with aiofiles.open(self._template_file_path, "r") as file:
            template = await file.read()

        async with aiofiles.open(image_file_path, "rb") as file:
            image_data = await file.read()
            image_base64 = b64encode(image_data).decode("utf-8")

        message_in_html = message.replace("\n\n", "</p><p>")
        message_in_html = message_in_html.replace("\n", "<br/>")
        message_in_html = f"<p>{message_in_html}</p>"

        html = template.replace("{{CONTENT}}", message_in_html)
        html = html.replace("{{IMAGE_BASE64}}", image_base64)
        html = html.replace("{{IMAGE_CONTENT_TYPE}}", "image/jpeg")

        async with aiofiles.open(output_path, "w") as file:
            await file.write(html)

        return html
User Interface
The last missing piece is the user interface. Since our needs are simple, we use Streamlit to create a basic web interface with:
- A neat list displaying previously crafted digital masterpieces.
- Buttons to cook up a fresh pair of text message and image.
- A place to tweak the text message to iron out any creases.
- A button to set the out-of-office settings in Outlook programmatically.
The UI makes use of all of our components to deliver the above functionality. In the code listing below, I’ve tried to spare you, dear reader, the nitty-gritty of the entire code. If you are interested, you are welcome to peruse the full implementation on GitHub:
from pathlib import Path
from typing import cast

import anyio
import streamlit as st

from roll.config import settings
from roll.data import ActiveOutOfOfficeSetting, AutoReplyRecord, DataRepository
from roll.email import AutoReplyHtmlCreator, OutlookAutoReplyClient
from roll.image import ImageOptimizer
from roll.io import FileDownloader
from roll.models import AutoReplyContentGenerator
from roll.utils import utcnow


class StreamlitApp:
    def __init__(
        self,
        data_dir: Path,
        ai_config_path: Path,
        html_template_file_path: Path,
        oof_data_dir: Path,
        outlook_login_name: str,
        outlook_password: str,
        outlook_account_name: str,
    ) -> None:
        # Signature reconstructed from the call in main(); body elided (see GitHub).
        ...

    async def run(self) -> None:
        await self._setup_page_config()
        await self._build_sidebar()
        await self._build_main_content()

    async def _setup_page_config(self) -> None:
        ...

    async def _build_sidebar(self) -> None:
        ...

    async def _build_main_content(self) -> None:
        ...

    async def _render_navbar(self, record: AutoReplyRecord) -> None:
        ...

    async def _create_new_content(self) -> None:
        ...

    async def _generate_message(self, record: AutoReplyRecord) -> None:
        ...

    async def _generate_image(self, record: AutoReplyRecord) -> None:
        ...

    async def _set_out_of_office(self, record: AutoReplyRecord) -> None:
        ...


async def main() -> None:
    """Main entry point of the UI."""
    app = StreamlitApp(
        data_dir=Path("data/repository"),
        ai_config_path=Path("config/auto-reply-content-gen.aiconfig.json"),
        html_template_file_path=Path("config/auto-reply-template.html"),
        oof_data_dir=Path("data/oof"),
        outlook_login_name=settings.LOGIN,
        outlook_password=settings.PASSWORD,
        outlook_account_name=settings.ACCOUNT_NAME,
    )
    await app.run()


if __name__ == "__main__":
    anyio.run(main)
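To give a feel for what hides behind those elided methods, here is a hedged sketch of the end-to-end flow the UI orchestrates when you press the buttons. The method names on the components match the classes shown earlier, while the glue code, parameter values, and file names are illustrative rather than the repo’s exact implementation:

async def create_and_publish_auto_reply() -> None:
    # (Assumes the imports from the Streamlit snippet above.)
    # 1. Create a record to hold all artifacts for this auto-reply.
    repository = DataRepository(data_dir=Path("data/repository"))
    record = await repository.create(
        ai_config_path=Path("config/auto-reply-content-gen.aiconfig.json"),
        html_template_path=Path("config/auto-reply-template.html"),
    )

    # 2. Generate the text and an accompanying image URL.
    generator = AutoReplyContentGenerator(
        config_file_path=record.ai_config_path, output_dir=record.dir, verbose=True
    )
    message = await generator.generate_message()
    image_url = await generator.generate_image(auto_reply_message=message)

    # 3. Download the image before the URL expires, then shrink it.
    downloader = FileDownloader(output_dir=record.dir, verify_ssl=True, verbose=True)
    try:
        image_path = await downloader.download_one(url=image_url)
    finally:
        await downloader.close()
    optimizer = ImageOptimizer(max_width=800, quantize=True, image_quality=80)
    optimized_path = optimizer.run(input_path=image_path)

    # 4. Render the HTML email and push it to Outlook as the internal auto-reply.
    html_creator = AutoReplyHtmlCreator(template_file_path=record.html_template_path)
    html = await html_creator.run(
        message=message,
        image_file_path=optimized_path,
        output_path=record.dir / "auto-reply.html",
    )
    outlook = OutlookAutoReplyClient(
        login_name=settings.LOGIN,
        password=settings.PASSWORD,
        account_name=settings.ACCOUNT_NAME,
    )
    oof_dir = Path("data/oof")
    oof_dir.mkdir(parents=True, exist_ok=True)
    await outlook.backup_to_json_file(output_path=oof_dir / "backup.json")
    await outlook.set_internal_reply(html_content=html)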
Below is a screencast of the user interface. The sidebar on the left is a gallery, which displays our past creations. Clicking the edit (✏️) button in the sidebar brings the content to the main stage on the right. Here, we can tweak the existing message or conjure up a new one, along with its visual counterpart. You can easily let the LLM produce texts in other languages; just tweak the prompt in the AIConfig file as discussed in the article. The Set out-of-office button takes the displayed content and sets it as our Outlook out-of-office message.
Time to Wrap Up
We went through the process of developing a Python program that creates atypical out-of-office emails using GPT-4 and DALL-E 3.
Sure, we could slap together a monstrous 1000-line script to achieve our goals quickly. But we didn’t. We followed the software engineering principles ingrained in us by years of brainwashing, principles we are eager to pass on to the new generation of wide-eyed software engineers.
Instinctively, we recalled that software engineering is more about thoughtful design than mere coding. Therefore, we started the undertaking by carefully considering how to structure the code.
We attempted to organize our code into classes, using descriptive names and type hints for clarity. Trying to write as little code as possible, we relied on other engineers’ work by using their Python libraries to solve our problems.
While we’ve set aside the scheduling aspect for a later, unspecified date, we’ve laid a solid foundation. Our app can generate whimsical yet professional auto-replies with relevant images. On top of that, it allows us to seamlessly integrate the emails into Outlook with a single click of a button.
So, next time you’re away — whether on a holiday, in a full-day workshop, or just taking a nap after a hearty lunch — why not let Generative AI generate your auto-replies? They may leave your colleagues smiling and, perhaps, wanting more.
Thank you for reading. If you fancy more articles like this in the future, follow me on Medium or connect on LinkedIn.