Prakhar Srivastav
February 09, 2023

Motivational quotes were quite the rage back in the day when MMS & email forwarding were popular. I remember my parents forwarding me one at the start of every morning. Fast forward to today: if you are lucky, you are part of some forwarding group on your messaging app of choice (WhatsApp, Telegram, etc.).
Inspired by the same idea, today we are going to build a service that sends our friends and family an AI generated motivational quote-of-the-day. Rather than hardcoding a list of motivational quotes, we are going to use a machine learning model to generate a quote on demand, so that we never run out of quotes to share!

The OpenAI GPT-2 model was proposed in Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. It's a causal transformer pre-trained using language modeling on a very large corpus of ~40 GB of text data.
To simplify this, at a high level OpenAI GPT2 is a large language model that has been trained on massive amounts of data. This model can be used to predict the next token in a given sequence.
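To make "predicting the next token" concrete, here's a toy sketch. The distribution below is made up purely for illustration; a real GPT-2 scores every token in its vocabulary, and decoding strategies are usually fancier than a greedy pick.

```python
# Toy next-token prediction: a real language model assigns a probability
# to every token in its vocabulary given the text so far. Here we
# hard-code a tiny, made-up distribution for the prompt
# "Keep an open ..." to show the idea.
toy_distribution = {
    "mind": 0.4,
    "heart": 0.3,
    "door": 0.2,
    "book": 0.1,
}

def predict_next_token(dist: dict) -> str:
    # Greedy decoding: pick the single most probable next token.
    return max(dist, key=dist.get)

print(predict_next_token(toy_distribution))  # "mind"
```

Repeating this step, feeding each predicted token back in as input, is how the model grows a prompt into a full quote.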
If that sounds too complicated, don't worry, you don't need to know any Machine Learning or AI to follow along with this project. Libraries such as Hugging Face make using this model in our app very easy.
We'll use the Hugging Face library to load and serve the ML model that will generate the quotes for us. Hugging Face makes it very easy to use transformer models (of which GPT-2 is a type) in our projects without any knowledge of ML or AI. As mentioned earlier, GPT-2 is a general purpose language model, which means it is good at predicting generic text given an input sequence. In our case, we need a model more suited for generating quotes. To get one, we have two options: fine-tune GPT-2 on a quotes dataset ourselves, or use a model that someone has already fine-tuned for this task.
Luckily, in our case there’s a fine-tuned model that has been trained on the 500k quotes dataset - https://huggingface.co/nandinib1999/quote-generator
With Hugging Face, using this model is as easy as creating a tokenizer
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
then, constructing a model from the pretrained model
model = AutoModelWithLMHead.from_pretrained("nandinib1999/quote-generator")
and finally, constructing the generator which we can use to generate the quote
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# use a starting prompt
generator("Keep an open mind and")

[{'generated_text': 'Keep an open mind and a deep love for others'}]
Now that we have a way to generate quotes, we have to think about how to use it in our app. There are multiple ways to go about building this: we could load the model directly inside the script that sends the messages, or we could serve the model behind its own API. A key plus point of the second option is that once the model is loaded, the API can respond to us quickly and can be used by other applications as well. FWIW, the first option is a totally valid approach too.
We can use FastAPI to build a quick serving API. Here's what that looks like:
# in file api.py
from pydantic import BaseModel
from fastapi import FastAPI, HTTPException
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

## create the pipeline
tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelWithLMHead.from_pretrained("nandinib1999/quote-generator")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

app = FastAPI()

class QuoteRequest(BaseModel):
    text: str

class QuoteResponse(BaseModel):
    text: str

### Serves the Model API to generate quote
@app.post("/generate", response_model=QuoteResponse)
async def generate(request: QuoteRequest):
    resp = generator(request.text)
    if not resp or not resp[0].get("generated_text"):
        raise HTTPException(status_code=500, detail='Error in generation')
    return QuoteResponse(text=resp[0]["generated_text"])
Let's test it out
$ uvicorn api:app
INFO:     Started server process [40767]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
Now we can start sending requests to the /generate endpoint that will generate a quote for us.
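As a quick sanity check, here's a minimal client-side sketch. It only builds the JSON body that the endpoint's QuoteRequest model expects; the seed prompt and the commented-out POST are illustrative assumptions matching the local uvicorn setup above.

```python
import json

def build_payload(seed: str) -> str:
    # /generate expects a JSON body matching the QuoteRequest model: {"text": ...}
    return json.dumps({"text": seed})

# With the server running, send it using the requests library:
#   import requests
#   resp = requests.post("http://127.0.0.1:8000/generate",
#                        data=build_payload("Always remember to"))
#   print(resp.json()["text"])
```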

Now that we have a way to generate quotes on demand, we can stop here and start working on sending this via Courier. But who are we kidding, no one reads text anymore! We can make this interesting by using a nice image and placing our quote on it to make it look like a poster.
Given our API, we can now do the following to generate a quote
import json
import requests
from random import choice

def get_quote():
    # feel free to add more starting prompts for more variety
    canned_seeds = ["Always remember to", "Start today with", "It is okay to"]
    seed = choice(canned_seeds)
    resp = requests.post('http://127.0.0.1:8000/generate', data=json.dumps({"text": seed}))
    return resp.json()["text"]
The first challenge is getting a beautiful background image for our quote. For that, we'll use the Unsplash API, which provides a nice endpoint to return a random image matching a query. Opening https://source.unsplash.com/random/800x800/?nature in our browser returns a nice nature image.
To keep things interesting, we can use different query terms such as stars, mountains, etc. Here's how the code for downloading our background image looks:
import shutil
import requests
from random import choice

image_backdrops = ['nature', 'stars', 'mountains', 'landscape']
backdrop = choice(image_backdrops)
response = requests.get("https://source.unsplash.com/random/800x800/?" + backdrop, stream=True)

# write the output to img.png on our filesystem
with open('img.png', 'wb') as out_file:
    shutil.copyfileobj(response.raw, out_file)
del response
Ok, now we have our background image and a quote, which means we can work on assembling the final image that will be sent to the recipients. At a high level we want to place some text on an image, but even this simple task can be challenging. For starters, there are a number of questions for us to answer: Where should the text be placed? How do we handle quotes that are too long to fit on one line? What text color will stay readable against an arbitrary background?
The answers to some of these questions are more complicated than others. To keep it simple, we'll put the text in the center and do some wrapping so that it looks good. Finally, we'll use a light text color for now. For all image manipulation, we'll use the Python Imaging Library (PIL) to make this easy for us.
import textwrap
from PIL import Image, ImageDraw, ImageFont

# any .ttf font file you have locally will do here
title_font = ImageFont.truetype("Roboto-Bold.ttf", 48)

# use the image we downloaded in the above step
img = Image.open("img.png")
width, height = img.size
image_editable = ImageDraw.Draw(img)

# wrap text
lines = textwrap.wrap(text, width=40)

# get the line count and generate a starting offset on the y-axis
line_count = len(lines)
y_offset = height/2 - (line_count/2 * title_font.getbbox(lines[0])[3])

# for each line of text, we generate an (x, y) to calculate the positioning
for line in lines:
    (_, _, line_w, line_h) = title_font.getbbox(line)
    x = (width - line_w)/2
    image_editable.text((x, y_offset), line, (237, 230, 211), font=title_font)
    y_offset += line_h

img.save("result.jpg")
print("generated result.jpg")
This generates the final image, called result.jpg.
For the penultimate step, we need to upload the image so that we can use that with Courier. In this case, I'm using Firebase Storage but you can feel free to use whatever you like.
import firebase_admin
from firebase_admin import credentials
from firebase_admin import storage

cred = credentials.Certificate('serviceaccount.json')
firebase_admin.initialize_app(cred, {...})

bucket = storage.bucket()
blob = bucket.blob(filename)
blob.upload_from_filename(filename)
blob.make_public()
return blob.public_url
Finally, we have everything we need to start sending our awesome quotes to our friends and family. We can use Courier to create a good looking email template.

Sending a message with Courier is as easy as it gets. While Courier has its own SDKs that can make integration easy, I prefer using their API endpoint to keep things simple. With my AUTH_TOKEN and TEMPLATE_ID in hand, we can use the following piece of code to send our image
import os
import requests
from datetime import datetime

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer {}".format(os.environ['COURIER_AUTH_TOKEN'])
}

message = {
    "to": { "email": os.environ["COURIER_RECIPIENT"] },
    "data": {
        "date": datetime.today().strftime("%B %d, %Y"),
        "img": image_url  ## this is the image url we generated earlier
    },
    "routing": {
        "method": "single",
        "channels": ["email"]
    },
    "template": os.environ["COURIER_TEMPLATE"]
}

requests.post("https://api.courier.com/send", json={"message": message}, headers=headers)
The API key can be found in Settings and the Template ID can be found in the template design's settings. And that's it!
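To recap the whole pipeline, here's an end-to-end sketch of the daily job. Each step function is a hypothetical stub standing in for the corresponding snippet above, not code from a real service; wire in the real implementations when you put it together.

```python
def generate_quote() -> str:
    # stub: the real version POSTs a random seed prompt to /generate
    return "Keep an open mind and a deep love for others"

def compose_poster(quote: str) -> str:
    # stub: the real version fetches an Unsplash backdrop, overlays the
    # quote with PIL, and returns the saved file's path
    return "result.jpg"

def upload_image(path: str) -> str:
    # stub: the real version uploads to Firebase Storage and returns the
    # public URL (the domain below is a placeholder)
    return "https://example.com/" + path

def send_daily_quote() -> str:
    quote = generate_quote()
    image_url = upload_image(compose_poster(quote))
    # the real version calls Courier's /send with image_url in the
    # template data, as shown above
    return image_url
```

Running send_daily_quote() on a daily schedule (cron, Cloud Scheduler, etc.) completes the service.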
This tutorial demonstrated how easy it is to get started with machine learning & Courier.
If you want to go ahead and improve this project, here are some interesting ideas to try
Prakhar is a senior software engineer at Google where he works on building developer tools. He's a passionate open-source developer and loves playing the guitar in his free time.
🔗 FastAPI
