
Exposing Python Scripts as API in Production with Yeedu Functions

A data engineer’s dream - write Python, click Deploy, get an API.

Yeedu Functions Flow

Every data or ML engineer knows this story: you’ve built a great Python program - maybe it scores insurance claims, analyzes patient notes, or validates datasets. It works perfectly in your local environment. But when it’s time to share it or connect it to real applications - that is, to turn a Python script into an API - the deployment friction begins: Docker containers, API gateways, scaling limits, security tokens… things get messy fast.

That’s where Yeedu Functions comes in. It lets you deploy any Python file as a production‑ready REST API on the Yeedu Functions managed compute - with complete control, governance, and observability. No external cloud setup. No juggling multiple services. Just your code, running in Yeedu.

What Is Yeedu Functions?

Yeedu Functions lets you execute and serve Python programs as REST APIs inside your Yeedu Workspace, acting as a Python function‑as‑a‑service purpose‑built for data teams. The underlying platform is Yeedu, a unified data platform designed to orchestrate workloads across clouds.

It’s built for data‑driven enterprises - insurance, pharma, life sciences - where Python logic must move from the lab to production quickly, securely, and at scale. You upload a .py file, define which function to expose, and Yeedu handles:

  • Packaging your Python file as a REST API and deployment on workspace compute
  • Authentication via bearer token
  • Scaling and concurrency management
  • Monitoring, logging, and control

It’s Python -> API - without the DevOps bottleneck.

How It Works

Here’s a streamlined workflow for how Yeedu Functions helps you build an API with Python in minutes:

Create or Import Your Python File

Upload your .py file (for example, iris_model.py) into your Yeedu Workspace.

Upload the script file to the Workspace Files at the desired location

Create a Yeedu Function Job

Go to Jobs → Create Job → Yeedu Function, and point it to your iris_model.py file. Configure options like project path, parallelism, concurrent request limits, and maximum scale.

Create the Yeedu Functions job with the required configurations and dependencies

Define Your Entry Method and Dependencies

Specify the Python method to expose (e.g., prediction(...)) and list any PyPI dependencies (such as pandas). This is where deploying an ML model as an API becomes straightforward.

Here’s the code inside iris_model.py:

import pickle 
import pandas as pd 
 
model = None 
 
def init(): 
    """ 
    Load the ML Model. 
    This method is called once when the service starts. 
    """ 
    global model 
    try: 
        # Load the model from the pickle file 
        with open("/files/iris_prediction/iris.pkl", "rb") as f: 
            model = pickle.load(f) 
        if model: 
            print("Model loaded successfully") 
    except Exception as error: 
        raise Exception(f"Error loading the model: {error}") 
 
def prediction(payload, context): 
    """ 
    Make a prediction using the loaded ML model. 
 
    Args: 
        payload (dict): JSON payload containing features for prediction. 
        context (dict): Contextual information such as request_id, etc. 
 
    Returns: 
        dict: Prediction result or error message. 
    """ 
    try: 
        # Extract features from payload 
        # Expecting payload to have keys: sepal_length, sepal_width, petal_length, petal_width 
        features = { 
            "sepal_length": payload.get("sepal_length"), 
            "sepal_width": payload.get("sepal_width"), 
            "petal_length": payload.get("petal_length"), 
            "petal_width": payload.get("petal_width"), 
        } 
 
        # Check if all features are provided 
        if None in features.values(): 
            return { 
                "status": "error", 
                "message": "Missing one or more features: sepal_length, sepal_width, petal_length, petal_width", 
                "request_id": context.get("request_id") 
            } 
 
        # Create DataFrame for prediction 
        input_data = pd.DataFrame([features]) 
 
        # Make prediction 
        # Make prediction (first and only row); rename the local so it 
        # doesn't shadow the prediction() function, and cast to a native 
        # Python type so the response stays JSON-serializable 
        pred = model.predict(input_data)[0] 
        if hasattr(pred, "item"): 
            pred = pred.item() 
 
        # Optionally, get prediction probabilities if the model supports it 
        if hasattr(model, "predict_proba"): 
            probabilities = model.predict_proba(input_data)[0].tolist() 
            probability_dict = dict(zip(model.classes_, probabilities)) 
        else: 
            probability_dict = {} 
 
        # Prepare response 
        response = { 
            "status": "success", 
            "prediction": pred, 
            "probabilities": probability_dict 
        } 
 
        return response 
 
    except Exception as e: 
        return { 
            "status": "error", 
            "message": str(e) 
        } 
 
def shutdown(): 
    """ 
    Shutdown the ML model if necessary. 
    """ 
    global model 
    if model: 
        # If the model requires any cleanup, do it here 
        model = None 
        print("Model unloaded successfully") 

This pattern allows stateful model loading, making it ideal for serving ML models as REST API endpoints.
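Before creating the job, you can sanity-check the init/prediction lifecycle locally by swapping the pickled model for a stub - a sketch that assumes nothing about the Yeedu runtime beyond the payload/context calling convention; StubModel and the sample payload are invented for illustration:

```python
# Stripped-down version of the handler above, with the model replaced
# by a stub so the request flow runs without iris.pkl or scikit-learn.
model = None

class StubModel:
    def predict(self, rows):
        # Pretend every input row is the "setosa" class.
        return ["setosa" for _ in rows]

def init():
    global model
    model = StubModel()

def prediction(payload, context):
    required = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
    features = {key: payload.get(key) for key in required}
    if None in features.values():
        return {"status": "error",
                "message": "Missing one or more features",
                "request_id": context.get("request_id")}
    label = model.predict([list(features.values())])[0]
    return {"status": "success", "prediction": label}

# Simulate the service lifecycle: init once, then handle a request.
init()
result = prediction(
    {"sepal_length": 5.1, "sepal_width": 3.5,
     "petal_length": 1.4, "petal_width": 0.2},
    {"request_id": "local-test"},
)
print(result)  # {'status': 'success', 'prediction': 'setosa'}
```

Once the contract behaves as expected with the stub, the real pickle-backed init() can be restored before upload.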

Click “Create” and Run the Job

Yeedu packages your code and instantly exposes it as a secure serverless Python API.

Invoke the Endpoint Anywhere

You’ll find endpoint details in your job configuration screen within Yeedu. Use it from apps, data pipelines, or workflows - authenticated with your bearer token.  

Call the functions API using the provided URL
Yeedu Functions Logs

Example: Deploying a Sentiment Analysis Function

Here’s a small Python file named sentiment.py:

from textblob import TextBlob 
 
def analyze_sentiment(text: str): 
    score = TextBlob(text).sentiment.polarity 
    if score > 0: 
        return {"sentiment": "positive"} 
    elif score < 0: 
        return {"sentiment": "negative"} 
    else: 
        return {"sentiment": "neutral"} 
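The score-to-label mapping above can also be unit-tested without TextBlob installed by factoring it into a pure function (label_sentiment is a name introduced here for illustration):

```python
def label_sentiment(score: float) -> dict:
    # Mirror the polarity thresholds used by analyze_sentiment:
    # positive above 0, negative below 0, neutral at exactly 0.
    if score > 0:
        return {"sentiment": "positive"}
    elif score < 0:
        return {"sentiment": "negative"}
    return {"sentiment": "neutral"}

print(label_sentiment(0.8))   # {'sentiment': 'positive'}
print(label_sentiment(-0.3))  # {'sentiment': 'negative'}
print(label_sentiment(0.0))   # {'sentiment': 'neutral'}
```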

Once deployed, you can invoke it directly:

POST https://<tenant>.yeedu.ai/functions/analyze_sentiment/invoke 
Authorization: Bearer <yeedu_token> 
Content-Type: application/json 
 
{ 
  "text": "Yeedu makes deployment effortless!" 
} 

Response:

{ 
  "sentiment": "positive" 
} 

No containers. No API gateway. Just a Python file as a REST API, live in production.
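From Python, the same call can be assembled with the standard library - a sketch using placeholder values for the tenant URL and token; the request is only constructed here, not sent:

```python
import json
import urllib.request

# Placeholders - substitute your tenant URL and Yeedu bearer token.
url = "https://example.yeedu.ai/functions/analyze_sentiment/invoke"
token = "YOUR_YEEDU_TOKEN"

body = json.dumps({"text": "Yeedu makes deployment effortless!"}).encode("utf-8")
request = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(request) would perform the call and return
# the JSON response; here we just inspect what would be sent.
print(request.get_method())                 # POST
print(request.get_header("Authorization"))  # Bearer YOUR_YEEDU_TOKEN
```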

Real‑World Scenarios: Insurance, Pharma & Life Sciences

Yeedu Functions is designed for real enterprise use cases where exposing Python logic as APIs in production is critical.

Life Sciences: Clinical Text Mining

Challenge: Extracting adverse‑drug reaction pairs from unstructured clinical notes.  

Yeedu Solution: Deploy clinical_nlp.py exposing extract_entities(), and then call it from EHR workflows.

Insurance: Fraud Detection & Risk Scoring

Challenge: Real‑time model scoring at claim intake.  

Yeedu Solution: Deploy fraud_score.py exposing score_claim(); call it from claim‑processing systems.
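As an illustration only, a rule-based stand-in for score_claim might look like this (the field names and thresholds are invented, not from an actual fraud model):

```python
def score_claim(payload, context):
    # Toy heuristic: weight prior claim count and flag large amounts.
    # Illustrative only - a real deployment would load a trained model in init().
    amount = payload.get("claim_amount", 0)
    prior_claims = payload.get("prior_claims", 0)
    score = min(1.0, 0.1 * prior_claims + (0.5 if amount > 10000 else 0.0))
    return {
        "risk_score": round(score, 2),
        "flag": score >= 0.5,
        "request_id": context.get("request_id", ""),
    }

print(score_claim({"claim_amount": 15000, "prior_claims": 2}, {}))
# {'risk_score': 0.7, 'flag': True, 'request_id': ''}
```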

Pharma: Molecular Property Prediction

Challenge: Serving models that predict toxicity or solubility for new compounds.  

Yeedu Solution: Deploy toxicity_model.py exposing predict_properties() for chemists to call via internal apps.

Whether it’s a rule engine, NLP pipeline, or model inference - Yeedu Functions makes operationalizing Python logic effortless

Why Yeedu Simplifies What Clouds Complicate

Traditional cloud platforms require juggling between multiple services just to build and deploy a Python REST API. Here’s how the flows compare:

AWS Lambda Flow


Azure Functions Flow


Google Cloud Functions Flow


Yeedu Functions Flow


One environment. One flow. One click. Yeedu enables serverless Python APIs without forcing data teams to become infrastructure experts.  

Cloud vs. Yeedu Comparison

| Feature | AWS Lambda | Azure Functions | GCP Functions | Yeedu Functions |
| --- | --- | --- | --- | --- |
| Where It Runs | AWS Cloud | Azure Cloud | GCP Cloud | Yeedu Workspace Compute |
| Setup Complexity | Multi-step | Multi-step | Multi-step | Single-step |
| Scaling | Configurable | Plan-based | Auto | Configurable inside Yeedu |
| Authentication | IAM / Cognito | Azure AD | IAM | Bearer Token (Yeedu) |
| Monitoring | CloudWatch | App Insights | Cloud Logging | Yeedu Dashboard |
| Data Access | External | External | External | Native Workspace Access |
| Users | DevOps | Developers | Developers | Data Engineers & Scientists |

Why Teams Love Yeedu Functions

  • Zero external services. Everything runs inside Yeedu’s governed environment.
  • Data‑native execution. Connects directly to your data and secrets stores.
  • Instant deployment. Publish APIs in minutes, not days.
  • Enterprise‑grade security. Built on Yeedu’s workspace authentication & controls.
  • Full observability. Logs, metrics, usage analytics built‑in.

Intuitive UX & Monitoring Built In

Beyond just deployment, Yeedu focuses heavily on the user experience for data engineers, analysts, and scientists. The UX is streamlined:

  1. Select Python Script - Choose any uploaded script or notebook.
  2. Define Your Function - Pick the method to expose (must follow the payload, context input pattern).
  3. Configure Runtime - Optionally list dependencies, environment variables, project paths.
  4. Deploy & Test Instantly - One click, REST endpoint is live.

Once live, you can monitor and manage the API directly within the UI - without writing backend or infrastructure code. Real‑time request metrics, concurrency controls, and logs are baked into the job view.  

Example Template & Use Cases

Function Syntax

model = None 
connection = None 
 
def init(): 
    global model, connection 
    # initialize ML model, DB connections, etc. 
    pass 
 
def my_function(payload, context): 
    # process() stands in for your own business logic 
    result = process(payload) 
    return {"result": result} 
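As a minimal, runnable instance of this contract (the greet function and its state dict are illustrative, not part of the Yeedu API):

```python
state = {}

def init():
    # One-time setup when the service starts.
    state["greeting"] = "Hello"

def greet(payload, context):
    # payload carries the request JSON; context carries request metadata.
    name = payload.get("name", "world")
    return {
        "result": f"{state['greeting']}, {name}!",
        "request_id": context.get("request_uuid", ""),
    }

init()
print(greet({"name": "Yeedu"}, {"request_uuid": "abc-123"}))
# {'result': 'Hello, Yeedu!', 'request_id': 'abc-123'}
```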

Use Case: ML Model Inference 

from sklearn.datasets import load_iris 
from sklearn.ensemble import RandomForestClassifier 
 
def init(): 
    global model, target_names 
    iris = load_iris() 
    model = RandomForestClassifier(n_estimators=100, random_state=42) 
    model.fit(iris.data, iris.target) 
    target_names = iris.target_names 
 
def predict_iris_species(payload, context): 
    features = payload["features"] 
    prediction = model.predict([features]) 
    probabilities = model.predict_proba([features]) 
    confidence = float(max(probabilities[0])) 
    predicted_species = target_names[prediction[0]] 
    return { 
        "prediction": predicted_species, 
        "confidence": round(confidence, 3), 
        "request_id": context.get("request_uuid", "") 
    } 

Use Case: PySpark Data Processing

from pyspark.sql import SparkSession 
 
def init(): 
    global spark 
    spark = SparkSession.builder.appName("YeeduFunction").getOrCreate() 
 
def process_spark_data(payload, context): 
    data = [(1, "Alice", 34), (2, "Bob", 45), (3, "Charlie", 29)] 
    df = spark.createDataFrame(data, ["id", "name", "age"]) 
    filtered_df = df.filter(df.age > 30) 
    return {"result": filtered_df.toJSON().collect()} 

Use Case: External API Integration

import openai 
import os 
 
def generate_text(payload, context): 
    client = openai.OpenAI(api_key=os.getenv("OPENAI_KEY")) 
    response = client.chat.completions.create( 
        model="gpt-3.5-turbo", 
        messages=[{"role": "user", "content": payload["prompt"]}] 
    ) 
    return {"response": response.choices[0].message.content} 

Conclusion: Your Python Scripts Deserve Better

The journey from local script to production API shouldn’t require weeks of infrastructure work. With Yeedu Functions, exposing Python as an API in production becomes a one-click operation.

Why choose Yeedu Functions?

  • 5‑10× faster deployment
  • Purpose‑built for data teams
  • Simpler than AWS Lambda or other cloud functions
  • Stateful model loading, native PySpark support
  • Built‑in performance metrics and monitoring

You wrote that Python script. Now let it serve your business as a production-grade REST API.

Ready to transform your Python scripts into production‑ready APIs? Explore Yeedu’s documentation and platform today: Yeedu Docs
