I build workflows that replace manual work permanently — lead gen pipelines, CRM automation, AI agents, and data systems that run 24/7 without babysitting.
Daily-scheduled AI pipeline that discovers packaging companies, finds decision-makers, verifies emails, and pushes qualified leads to Pipedrive — fully hands-off.
Innovapack's sales team was manually Googling packaging companies, copy-pasting names into LinkedIn, guessing email formats, and dropping qualified leads into a spreadsheet by hand. It was eating 4-5 hours a day and still missing hundreds of prospects.
Schedule Trigger fires daily. AI extracts company names from SerpAPI search results. A deduplication layer checks Google Sheets to skip already-processed domains. For each new company, a second AI call finds the decision-maker's name and title. Hunter.io verifies the email. A qualification AI scores the lead against Innovapack's ICP. Qualified leads are pushed to Pipedrive with full context.
20-30 new qualified leads per day with zero manual input. The team wakes up to a full pipeline instead of an empty spreadsheet.
Multi-search fallback: if a single SERP result is insufficient, the workflow aggregates from multiple search queries before qualifying.
Lock mechanism: Google Sheets append-with-dedup prevents the same company being processed twice across runs.
AI models: OpenRouter with two separate models — fast model for extraction, reasoning model for qualification scoring.
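The dedup-and-lock step at the heart of this pipeline can be sketched as a small Code-node-style function. This is an illustrative sketch, not the production node: the function and field names (`filterNewCompanies`, `domain`) are my own, and in the real workflow the processed-domain list comes from a Google Sheets read.

```javascript
// Append-with-dedup gate: given domains already logged in the sheet and the
// freshly scraped companies, keep only domains never processed before.
// Also guards against duplicates within the same run.
function filterNewCompanies(scraped, processedDomains) {
  const seen = new Set(processedDomains.map((d) => d.toLowerCase().trim()));
  return scraped.filter((company) => {
    const domain = company.domain.toLowerCase().trim();
    if (seen.has(domain)) return false;
    seen.add(domain); // in-run duplicates are caught here
    return true;
  });
}

// Example: one already-processed domain and one in-batch duplicate are dropped,
// leaving only BoxCo.
const fresh = filterNewCompanies(
  [
    { name: "BoxCo", domain: "boxco.com" },
    { name: "PackIt", domain: "packit.io" },
    { name: "BoxCo GmbH", domain: "BoxCo.com" },
  ],
  ["packit.io"]
);
```

Normalizing the domain (lowercase, trimmed) before the set lookup is what makes the lock reliable across runs, since scraped casing varies.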
Intake form accepts a name, CSV, Excel, or image — AI extracts and enriches contact data, finds emails via two providers with fallback, and writes clean records to Google Sheets.
Sales reps were manually converting prospect lists from business card photos, PDFs, and spreadsheets into usable CRM records — hunting for emails one by one, copy-pasting into HubSpot by hand. The process was inconsistent and slow.
A Form Trigger accepts text, CSV, Excel, or an image. A router detects the input type. Mistral OCR handles images. AI normalizes all formats into a unified contact schema. A dual email-finder runs Snov.io first, then falls back to Hunter.io. If the email is still missing, a SERP enrichment step digs up additional context. Verified contacts are written to Google Sheets in a HubSpot-ready format.
Any team member can drop in a business card photo or messy spreadsheet and get a clean, enriched, deduped list ready for HubSpot import — in seconds.
58-node workflow with 4 parallel enrichment paths that converge via Merge nodes before final output.
Name-based email finding: separate sub-flow handles cases where only a name and company domain are known.
Multi-format extraction: XLSX, CSV, image, and raw text all normalize into the same schema before any enrichment runs.
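The dual-provider fallback reduces to a simple provider chain. In this sketch the providers are plain functions so the control flow is visible; the real nodes make authenticated HTTP calls to Snov.io and Hunter.io, and every name here (`findEmail`, `source`) is illustrative.

```javascript
// Try each email provider in order; first non-empty result wins.
// A provider error or rate limit falls through to the next provider.
function findEmail(contact, providers) {
  for (const lookup of providers) {
    try {
      const email = lookup(contact);
      if (email) return { ...contact, email, source: lookup.name };
    } catch (err) {
      // Swallow provider failures: the chain is the error handling.
    }
  }
  // Nothing found: the downstream SERP enrichment step takes over.
  return { ...contact, email: null, source: null };
}

// Usage with stubbed providers, Snov.io first:
// findEmail({ name: "Jane Doe", domain: "acme.com" }, [snovLookup, hunterLookup]);
```

Keeping the provider order in an array makes it trivial to add a third provider later without touching the chain logic.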
AI reads email history for each contact, drafts the next outreach email, routes it to Slack for one-click approval, sends it via Brevo, logs it back to HubSpot, and schedules the next follow-up task automatically.
A sales team was manually reading through HubSpot contact histories, writing individual follow-up emails, and manually scheduling the next task after each send. Every rep was spending 30-50 hours a week on work that required no real judgment.
A Webhook fires when a HubSpot task comes due. The workflow pulls the contact's full email history, strips HTML, and builds AI context. An AI agent drafts the next outreach email. A Slack message with Approve / Reject / Revise buttons is sent to the rep. On approval, Brevo sends the email, HubSpot logs it, the task is marked complete, and a new follow-up task is scheduled automatically.
Reps review and approve emails — they don't write them. The full cycle from task due to next task scheduled runs in under 60 seconds.
114-node workflow with full human-in-the-loop: approval, rejection, revision request, and re-generation paths all handled.
Context building: email history sorted by date, HTML stripped, truncated to fit AI context window with relevance preserved.
Timeout guard: if no Slack response within window, a timeout notification fires and the task stays open for manual handling.
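The context-building step can be sketched roughly as follows. This is a simplified stand-in: it uses a character budget instead of a real token count, a regex instead of a full HTML parser, and all names (`buildContext`, `maxChars`) are illustrative.

```javascript
// Sort messages oldest-first, strip HTML tags, then drop the oldest
// messages until the transcript fits the budget, so the most recent
// exchange always survives truncation.
function buildContext(messages, maxChars) {
  const cleaned = [...messages]
    .sort((a, b) => new Date(a.date) - new Date(b.date))
    .map((m) => ({
      date: m.date,
      text: m.html.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim(),
    }));
  // Trim from the front (oldest) rather than the back (newest).
  while (
    cleaned.length > 1 &&
    cleaned.reduce((n, m) => n + m.text.length, 0) > maxChars
  ) {
    cleaned.shift();
  }
  return cleaned.map((m) => `[${m.date}] ${m.text}`).join("\n");
}
```

Trimming oldest-first is the "relevance preserved" part: the model always sees the latest reply, which is what the next email has to respond to.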
One-shot migration for Benjamin: pulled funds, professional contacts, and personal contacts from three Notion databases, transformed and deduplicated records, and pushed clean organizations and people to Pipedrive.
Benjamin had years of contacts and fund data spread across three separate Notion databases with inconsistent field names, duplicate entries, and mixed relationship types. Moving to Pipedrive manually would have taken days and introduced errors.
The workflow reads all three Notion databases in parallel — Funds, Professional Contacts, Personal Contacts. Custom Code nodes transform each schema to Pipedrive's data model. A Merge node consolidates all people records. Deduplication runs before any Pipedrive writes. Organizations are created first, then people are associated to the correct org. A Google Drive attachment handler migrates linked files.
Clean, deduped migration completed in one execution. All relationships preserved. No manual cleanup needed on the Pipedrive side.
Schema transformation: Notion's property format (nested objects with type keys) requires substantial flattening before Pipedrive's flat API can accept it.
Relationship preservation: Fund → Organization mapping maintained so contacts land in the correct Pipedrive org on creation.
File migration: Google Drive export links processed and attached to records.
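The flattening described above follows from how Notion's API shapes page properties: every property is a nested object keyed by its type. A minimal sketch of the transform, handling only a few common property types (the production Code node covers more, and `flattenNotionPage` is an illustrative name):

```javascript
// Flatten Notion's nested property objects into the flat key/value shape
// a CRM API like Pipedrive's expects.
function flattenNotionPage(properties) {
  const flat = {};
  for (const [key, prop] of Object.entries(properties)) {
    switch (prop.type) {
      case "title":
        flat[key] = prop.title.map((t) => t.plain_text).join("");
        break;
      case "rich_text":
        flat[key] = prop.rich_text.map((t) => t.plain_text).join("");
        break;
      case "email":
        flat[key] = prop.email;
        break;
      case "select":
        flat[key] = prop.select ? prop.select.name : null;
        break;
      default:
        flat[key] = null; // unhandled types get flagged for manual review
    }
  }
  return flat;
}
```

Example input and output:

```javascript
flattenNotionPage({
  Name: { type: "title", title: [{ plain_text: "Acme Fund" }] },
  Email: { type: "email", email: "gp@acme.com" },
});
// { Name: "Acme Fund", Email: "gp@acme.com" }
```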
n8n control layer for an autonomous Upwork scraper: AI ranks scraped jobs by score, approved jobs are queued for automated application, and Telegram handles real-time approval and confirmation.
Manually reviewing Upwork job feeds, evaluating fit, writing proposals, and submitting applications across dozens of jobs per day is a full-time job in itself. The goal was to automate the entire pipeline while keeping human judgment in the loop for final approval.
A Form Trigger gives control (Run, Pause, Status) via a web UI. Scraped jobs are passed to an AI ranking node that scores each job 0-10 for automation relevance, client quality, and opportunity. Jobs scoring ≥5 are written to Google Sheets. Approved jobs trigger HTTP calls to the Playwright-based scraper which handles the actual application. Webhooks return confirmation results from the scraper back to n8n, which logs outcomes and sends a Telegram notification.
The full loop — scrape → rank → approve → apply → confirm — runs with one click. Human time is spent only on reviewing AI-ranked candidates, not on finding or applying to them.
Bidirectional control: n8n both controls the external Playwright process via HTTP and receives webhook callbacks from it — acting as the orchestration layer.
AI ranking prompt: structured scoring across three dimensions (automation fit, client quality, opportunity) with specific penalties for low budgets, unverified clients, and disqualifying criteria.
Approval gate: Google Sheets acts as the human-review queue — jobs sit there until manually approved before the apply pipeline fires.
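The scoring gate reduces to a weighted sum plus penalties. The weights and penalty values below are illustrative placeholders, not the production prompt's actual numbers; in the real workflow the sub-scores come back from the AI ranking node.

```javascript
// Combine the three AI sub-scores (each 0-10), apply penalty flags,
// and clamp to the 0-10 range.
function scoreJob(job) {
  let score =
    0.5 * job.automationFit + 0.3 * job.clientQuality + 0.2 * job.opportunity;
  if (job.budget < 100) score -= 2; // low-budget penalty
  if (!job.clientVerified) score -= 1; // unverified-client penalty
  return Math.max(0, Math.min(10, score));
}

// Keep only jobs clearing the threshold; these land in the
// Google Sheets review queue.
function queueForReview(jobs, threshold = 5) {
  return jobs
    .map((job) => ({ ...job, score: scoreJob(job) }))
    .filter((job) => job.score >= threshold);
}
```

Keeping the penalties additive rather than baked into the AI prompt means the cutoffs can be tuned in the workflow without re-prompting the model.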
If your team is copy-pasting, manually following up, or running the same process every day — that's the workflow I build next.
View Upwork Profile
Email me