Building a work log distribution system: broadcasting context to where teams already are
The hardest part of documentation isn’t writing it. It’s making sure the right people actually see it. You can write brilliant work logs explaining decisions and tradeoffs, but if they live in a work-log directory that nobody remembers to check, they might as well not exist. The information has to flow to where people already are - Slack channels, Discord servers, project management tools, email inboxes.
This is about building a system that broadcasts work logs across multiple destinations automatically, with graceful degradation and per-project routing. But really it’s about the architecture choices that make broadcast systems maintainable, and the questions those choices raise about how teams communicate.
The problem: information locked in local files
I’ve been using Claude Code’s /close skill to generate daily work logs - editorial reflections on the day’s work, documenting not just what changed but why it mattered and what questions it raised. The format forces you to think beyond git commits and TODO lists. It’s meant to be read by humans trying to understand context.
But the logs lived in work-log/YYYY-MM-DD.md files in each project repository. To see what happened in the utilities project, you had to remember to check that directory. To understand work across six active projects, you needed to context-switch six times. The work logs were valuable - when you could find them.
The obvious solution: broadcast them. Post to Slack for the team channel, Discord for the community, Notion for searchable archive, email for stakeholders. Let the information flow to where people already are, instead of requiring them to come find it.
But broadcast systems are deceptively hard. The naive approach - write code that posts to Slack, then add Discord support, then bolt on Notion, then realize email is completely different - creates a tangled mess. When Slack’s webhook fails, does Discord still get the message? When you add a new destination, do you have to touch every posting call? How do you test any of this?
Architecture: modular utilities plus orchestrator
The key insight: treat each destination as an independent module that can succeed or fail without affecting the others. One Python script per destination, plus an orchestrator that calls them all and reports results.
The utilities:
- post_to_slack.py - Slack webhook posting with mrkdwn conversion
- post_to_discord.py - Discord webhook posting with colored embeds
- post_to_work_log_notion.py - Notion API integration to the “Work log” database
- post_to_email.py - HTML email via Brevo or Gmail SMTP
- distribute_work_log.py - Orchestrator that calls the others
Each utility is self-contained: it takes work log metadata and content, formats for its destination, posts, and returns success or failure. They don’t know about each other. They can be tested independently. They can be used outside the work log context - post_to_slack.py is just a Slack posting tool that happens to work well for work logs.
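The contract each utility honors is small. Here’s a sketch of the shape - the PostResult type and the exact signature are illustrative, not the actual code:

```python
# Sketch of the per-destination contract (names are illustrative):
# take work log metadata and content, post, return success or failure.
from dataclasses import dataclass

import requests

@dataclass
class PostResult:
    destination: str
    ok: bool
    detail: str = ""  # permalink on success, error text on failure

def post_to_slack(webhook_url: str, title: str, content: str) -> PostResult:
    payload = {"text": f"*{title}*\n{content}"}
    try:
        resp = requests.post(webhook_url, json=payload, timeout=10)
        return PostResult("slack", resp.ok, "" if resp.ok else resp.text)
    except requests.RequestException as exc:
        # Network failure becomes a reportable result, not a crash
        return PostResult("slack", False, str(exc))
```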
The orchestrator pattern:
```python
# Simplified structure of distribute_work_log.py
def distribute(project, title, date, content):
    results = {}
    config = load_config(project)
    if config.get('slack_webhook'):
        results['slack'] = post_to_slack(...)
    if config.get('discord_webhook'):
        results['discord'] = post_to_discord(...)
    if config.get('notion_enabled'):
        results['notion'] = post_to_notion(...)
    return summarize_results(results)
```
This is graceful degradation in practice. If Slack fails, Discord still gets the message. If Notion is down, the local work log file remains the source of truth. Each destination reports independently: “3 succeeded, 1 skipped, 0 failed.”
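The summary step is just counting, with skipped destinations inferred from what never entered the results dict. A minimal sketch, assuming each utility returns a truthy value on success:

```python
ALL_DESTINATIONS = ["slack", "discord", "notion", "email"]

def summarize_results(results: dict) -> str:
    # Destinations absent from results were never configured, i.e. skipped
    succeeded = sum(1 for ok in results.values() if ok)
    failed = sum(1 for ok in results.values() if not ok)
    skipped = len(ALL_DESTINATIONS) - len(results)
    return f"{succeeded} succeeded, {skipped} skipped, {failed} failed"
```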
Per-project routing:
The configuration lives in work_log_config.json:
```json
{
  "utilities": {
    "slack_webhook": "https://hooks.slack.com/...",
    "discord_webhook": "https://discord.com/...",
    "notion_enabled": true
  },
  "authexis": {
    "slack_webhook": "https://hooks.slack.com/...",
    "discord_webhook": null,
    "notion_enabled": true
  }
}
```
Each project opts into the destinations that make sense for its team. The utilities project posts to all three. Authexis skips Discord. A solo project might only use Notion. The infrastructure is consistent, but adoption is voluntary.
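Loading the config is deliberately boring - a sketch, assuming the JSON file sits next to the orchestrator script:

```python
import json
from pathlib import Path

def load_config(project: str) -> dict:
    """Return one project's destination settings, or {} if unconfigured."""
    config_path = Path(__file__).parent / "work_log_config.json"
    all_projects = json.loads(config_path.read_text())
    return all_projects.get(project, {})
```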
Design decisions and their tradeoffs
Why modular utilities instead of a monolithic poster?
Benefit: Reusability. post_to_slack.py can post anything to Slack, not just work logs. You could use it for deployment notifications, error alerts, whatever. Each utility solves one problem well.
Tradeoff: More files. Five utility scripts plus an orchestrator is heavier than one big posting function. But the complexity is organized - each file has a single clear purpose.
Why JSON config instead of environment variables?
Benefit: Per-project routing. Environment variables are global - you can’t say “utilities posts to Slack channel A, authexis posts to Slack channel B” without namespace gymnastics. JSON config makes per-project settings natural.
Tradeoff: Another file to maintain. But webhook URLs are low-sensitivity compared to API keys - the worst someone can do with one is post to the channel - so they can be committed and version controlled. Changes to routing are tracked in git.
Why graceful degradation instead of fail-fast?
Benefit: Resilience. One broken webhook doesn’t stop the entire distribution. You get partial success, which is often good enough.
Tradeoff: Complexity in error handling. You need to track which destinations succeeded, report clearly, and make sure failure modes are observable. Silent partial failure is worse than loud total failure.
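Concretely, every post call gets wrapped so an exception becomes a recorded, visible failure instead of a crash. A sketch of the pattern (the attempt helper is mine, not the actual utility):

```python
def attempt(destination: str, post_fn, *args) -> bool:
    """Run one destination's post, converting exceptions into loud failures."""
    try:
        post_fn(*args)
        print(f"✓ {destination.title()}: Posted")
        return True
    except Exception as exc:
        print(f"✗ {destination.title()}: Failed ({exc})")
        return False
```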
Why separate Notion integration instead of MCP server?
Benefit: Control. The Notion integration uses their API directly, with custom markdown-to-blocks conversion and property formatting tuned for work logs. We know exactly what it does.
Tradeoff: More code to maintain. An MCP server would abstract the Notion complexity, but at the cost of another dependency and less control over the conversion logic.
Format conversion: the hidden complexity
Each destination speaks a different dialect. Standard markdown uses **bold** and *italic*. Slack’s mrkdwn uses *bold* and _italic_. Discord wants embeds with color codes. Notion wants block objects. HTML email wants… HTML.
Slack conversion:
```python
import re

def convert_markdown_to_slack_mrkdwn(text: str) -> str:
    # Order matters: park bold behind a sentinel first, so the italic
    # pass can't re-match the converted single asterisks
    text = re.sub(r'\*\*([^*]+)\*\*', '\x00\\1\x00', text)
    # Italic: *text* -> _text_; lookarounds skip remaining ** pairs
    text = re.sub(r'(?<!\*)\*(?!\*)([^*]+?)(?<!\*)\*(?!\*)', r'_\1_', text)
    # Restore sentinels as Slack's single-asterisk bold
    text = text.replace('\x00', '*')
    # Links: [text](url) -> <url|text>
    text = re.sub(r'\[([^\]]+)\]\(([^\)]+)\)', r'<\2|\1>', text)
    return text
```
The regex looks arcane, but it’s handling edge cases: park converted bold behind a sentinel so the italic pass can’t re-match it, don’t match partial asterisks, and get the link format right or Slack shows raw markdown.
Discord embeds:
Discord limits fields to 1024 characters. Long work log sections need chunking. Embeds support color coding (green for success themes, blue for analysis, orange for questions). The visual hierarchy helps scanning.
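The chunking might look something like this - a sketch that prefers paragraph boundaries and only hard-splits when a single paragraph exceeds the limit:

```python
DISCORD_FIELD_LIMIT = 1024

def chunk_for_discord(text: str, limit: int = DISCORD_FIELD_LIMIT) -> list[str]:
    """Split text into embed-field-sized chunks at paragraph boundaries."""
    chunks: list[str] = []
    current = ""
    for paragraph in text.split("\n\n"):
        # Flush the running chunk if this paragraph won't fit alongside it
        if current and len(current) + len(paragraph) + 2 > limit:
            chunks.append(current)
            current = ""
        # A single oversized paragraph still has to be hard-split
        while len(paragraph) > limit:
            chunks.append(paragraph[:limit])
            paragraph = paragraph[limit:]
        current = f"{current}\n\n{paragraph}" if current else paragraph
    if current:
        chunks.append(current)
    return chunks
```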
Notion blocks:
```python
# Simplified markdown-to-blocks conversion
blocks = []
for line in markdown.split('\n'):
    if line.startswith('## '):
        blocks.append({
            'type': 'heading_2',
            'heading_2': {
                'rich_text': [{'text': {'content': line[3:]}}]
            }
        })
    elif line.startswith('- '):
        blocks.append({
            'type': 'bulleted_list_item',
            'bulleted_list_item': {
                'rich_text': [{'text': {'content': line[2:]}}]
            }
        })
```
This is simplified - the real version handles nested lists, code blocks, emphasis, and Notion’s 2000-character-per-block limit. But the pattern is: parse markdown structure, emit destination-specific objects.
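The 2000-character limit means even one long paragraph has to be split across multiple rich_text objects. A sketch of that guard, assuming a simple hard split (helper name is mine):

```python
NOTION_TEXT_LIMIT = 2000

def to_rich_text(content: str) -> list[dict]:
    """Split content into rich_text objects under Notion's per-object limit."""
    return [
        {"type": "text", "text": {"content": content[i:i + NOTION_TEXT_LIMIT]}}
        for i in range(0, len(content), NOTION_TEXT_LIMIT)
    ]
```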
The conversion code is where bugs hide. Testing requires posting to actual webhooks and inspecting results visually. Automated tests can verify structure, but not whether the message “looks right” in Discord.
Integration with workflows
The distribution system hooks into the /close skill - the end-of-day workflow that generates work logs. After creating the local markdown file and copying it to the blog, step 4.1 runs:
```bash
cd ../utilities && source venv/bin/activate && python distribute_work_log.py \
  --project "dotfiles" \
  --title "Work log: dotfiles - January 16, 2026" \
  --date "2026-01-16T17:45:00-05:00" \
  --categories "work-log,dotfiles" \
  --tags "development,claude-code,skills" \
  --content-file "../dotfiles/work-log/2026-01-16.md"
```
The orchestrator reads the config, determines which destinations are enabled for “dotfiles”, posts to each, and reports results:
```
✓ Slack: Posted
✓ Discord: Posted
✓ Notion: Posted (https://notion.so/...)
⊘ Email: Skipped (No recipients configured)
```
The workflow summary includes distribution status. If Slack fails, you know immediately. If all three succeed, the work log is archived in three places simultaneously - searchable in Notion, threaded in Slack, visible in Discord.
What this reveals about broadcast systems
Broadcast is cheap, targeting is expensive. Posting the same message to three webhooks is easy. But should the utilities team see phantasmagoria work logs in their Slack? Should stakeholders get all work logs or just high-level summaries? The infrastructure can broadcast to everyone, but intelligent routing requires understanding who needs what.
Graceful degradation makes failure visible. When Slack is down, the orchestrator reports “1 failed, 2 succeeded” instead of crashing. You see the partial failure, assess whether it matters, and continue. Silent failure - posting succeeds but nobody sees it - is worse than loud failure.
Format conversion is where complexity accumulates. Each destination has quirks. Slack’s mrkdwn italic regex needs negative lookahead to avoid double-converting bold. Discord embeds have field limits. Notion blocks have content limits. These aren’t bugs - they’re design choices by those platforms. But they multiply: three destinations with three quirks each means nine edge cases.
Local files remain source of truth. Despite broadcasting to three services, the canonical work log lives in work-log/YYYY-MM-DD.md. Notion could delete entries, Slack messages could disappear, Discord could be unavailable. The local markdown file is version controlled, backed up, and survives service outages. Broadcast systems augment, they don’t replace.
Voluntary adoption works better than mandates. The config lets each project opt into relevant destinations. Forcing all projects to post to all channels creates noise. Letting teams choose creates buy-in. Some projects use all four destinations, others just Notion, others only Slack. The system supports all patterns without judgment.
What went wrong and what we learned
Problem: Slack showed **bold** instead of rendering bold text.
The initial version posted raw markdown. Slack interpreted some constructs (links) but ignored others (bold, italic). Converted markdown worked, but the regex order matters - converting italic before bold breaks because **text** has asterisks that match the italic pattern.
Lesson: Test format conversion visually in the actual destination. Automated tests can verify that **text** becomes *text*, but not that Slack actually renders it bold.
Problem: Notion integration failed silently - no error, but no entry created.
Two causes: invalid API token (fixed by regenerating), then wrong date property format. Notion’s API changed between versions. The fix was reading their current docs and matching their exact property structure.
Lesson: External APIs change. Pin API versions if possible, or test against their current docs regularly. Silent failures are the worst - always include detailed error reporting.
Problem: Accidentally wiped .env file while updating Notion token.
Used Write tool instead of Edit. Lost all API keys - Brevo, OpenAI, domain registrars, everything. User restored from backup, but the blast radius was huge.
Lesson: Treat .env files like the sensitive data they are. Never overwrite, always append or edit. Added explicit warning to CLAUDE.md to prevent recurrence. The cost of .env destruction is disproportionate because every integration breaks simultaneously.
Problem: Projects had different TODO.md conventions - forcing standardization would destroy context.
Initial /close skill created TODO.md with a specific format. But existing projects used different structures: “Current Sprint” vs “Next session”, detailed subsections vs simple checkboxes.
Lesson: Preserve conventions when they exist, standardize when creating new. Read first, then adapt. Forcing rewrites destroys muscle memory and context. Let standardization emerge gradually through new files, not mandated conversions.
Cross-project synthesis: the /sum-up skill
With work logs flowing from multiple projects into Notion, Slack, and Discord, a new capability emerged: what if we could synthesize across all those logs to find patterns, connections, and strategic priorities?
The /sum-up skill does exactly this. It discovers all work logs from the last N days (default 7), sends them to the Claude API with a carefully crafted synthesis prompt, and generates an editorial analysis that identifies:
Cross-project patterns:
- Similar problems surfacing in different contexts
- Architectural decisions appearing across multiple projects
- Recurring themes and tensions
Cross-pollination insights:
- Insights from project A that apply to project B
- Standardization opportunities
- Code or practices that could be reused
Meta-patterns about work process:
- How tools and infrastructure shape thinking
- Patterns in how decisions get made
- What’s changing about how work happens
Strategic priorities:
- Action items extracted from all logs
- Blockers needing resolution
- Where focus would have highest impact
The synthesis itself is a work log - posted to Notion with Type=‘Synthesis’, distributed to dedicated Slack and Discord channels. It’s editorial, not mechanical: thought-provoking questions, specific references to source logs, connections between tactical details and strategic patterns.
The architecture:
```bash
# Discover work logs across all projects
python synthesize_work_logs.py --discover --days 7

# Generate synthesis using Claude API
python synthesize_work_logs.py --days 7 \
  --output polymathic-h/work-log/synthesis-2026-01-17.md

# Distribute like any other work log
python distribute_work_log.py --project "synthesis" \
  --type "Synthesis" --content-file synthesis-2026-01-17.md
```
The synthesis utility discovers work-log/*.md files across all configured projects, parses YAML frontmatter and markdown content, and sends the complete context to the Claude API. The prompt instructs the model to write in the same editorial voice as the work logs themselves - connecting dots between projects, raising questions, identifying what matters beyond immediate execution.
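The discovery step might look like this - a sketch assuming projects sit side by side under one root, work logs are named YYYY-MM-DD.md, and the python-frontmatter package handles the YAML headers:

```python
from datetime import date, timedelta
from pathlib import Path

import frontmatter  # pip install python-frontmatter

def discover_work_logs(projects_root: Path, days: int = 7) -> list[dict]:
    """Collect recent work-log entries across every project directory."""
    cutoff = date.today() - timedelta(days=days)
    logs = []
    for path in sorted(projects_root.glob("*/work-log/*.md")):
        log_date = date.fromisoformat(path.stem)  # files named YYYY-MM-DD.md
        if log_date >= cutoff:
            post = frontmatter.load(str(path))
            logs.append({
                "project": path.parts[-3],  # the directory above work-log/
                "date": log_date,
                "meta": post.metadata,
                "content": post.content,
            })
    return logs
```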
Why this works:
Work logs are already editorial thought pieces documenting decisions, tradeoffs, and questions. They’re semantic, not transactional. An LLM can read ten work logs and identify that Phantasmagoria’s linting work and Authexis’s human checkpoint decisions both wrestle with “when should automation pause for judgment?” That’s cross-pollination.
The synthesis quality depends on work log quality. Mechanical status reports produce mechanical summaries. Thoughtful reflection produces insightful synthesis. The infrastructure amplifies what you put in.
Example patterns from actual synthesis:
From January 15-16 synthesis: “When infrastructure becomes the work” - six projects simultaneously built workflow tooling (linters, distribution systems, format converters) rather than user features. The meta-pattern: infrastructure for thinking about work becomes the primary artifact.
Another: “The silent failure archipelago” - Authexis hit SwiftUI lifecycle cascades, Synaxis battled CDN caching, Phantasmagoria fought Stellaris silent syntax errors. Common thread: modern systems optimize for developer experience in ways that create invisible failure modes.
These aren’t insights you’d see in individual logs. They emerge from reading across projects with enough context to spot patterns.
The distribution twist:
Discord failed to post the first synthesis - too large for Discord’s 6000-character embed limit. Synthesis documents run 8000+ characters when they’re comprehensive. The graceful degradation pattern held: Slack and Notion succeeded, Discord failed visibly. The synthesis exists in searchable form (Notion) and threaded discussion form (Slack), which is probably sufficient.
This reveals a design question: should synthesis documents be shorter to fit Discord limits, or comprehensive at the cost of Discord compatibility? Right now they optimize for depth. Discord could work with truncation or link-to-full-version, but that hasn’t been implemented.
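If truncation were added, it could be as simple as this - not implemented today, just one possible shape:

```python
DISCORD_EMBED_LIMIT = 6000

def truncate_for_discord(content: str, full_url: str) -> str:
    """Trim synthesis content to fit, pointing readers at the full version."""
    if len(content) <= DISCORD_EMBED_LIMIT:
        return content
    suffix = f"\n\n… truncated. Full synthesis: {full_url}"
    budget = DISCORD_EMBED_LIMIT - len(suffix)
    # Cut at the last complete line within budget, then append the pointer
    return content[:budget].rsplit("\n", 1)[0] + suffix
```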
Where this could go
Bidirectional flow. Right now work logs broadcast out. What if Slack threads, Discord replies, and Notion comments flowed back? Not as automated sync - that’s fragile - but as suggestions: “Discussion in #utilities suggests adding X to tomorrow’s TODO.”
Smart routing based on content. If a work log mentions “breaking change” or “security”, should it automatically go to more destinations? Should stakeholder emails trigger based on keywords rather than static config?
Threading and context. Slack and Discord support threads. Should each day’s work log be a reply to the previous day’s, creating a threaded history? Or do fresh top-level posts work better for visibility?
Synthesis-driven planning. The “Where this could go” section of each synthesis document contains action items and priorities. Could those flow back into TODO.md files automatically? Or appear in next morning’s /start skill context?
Format evolution. Work logs are currently editorial narrative - “what happened, what it means, what questions it raises.” That works for one audience. Could the distribution system generate different views for different destinations? Stakeholders get high-level impact, teammates get technical decisions, maintainers get architectural reasoning?
Meta-learning about workflows. The /close skill codifies a workflow, but workflows evolve. If multiple projects consistently skip certain steps or add their own, maybe the canonical skill should adapt. Can tools learn from how they’re actually used?
The meta-question: does better tooling improve thinking?
Building infrastructure for documenting work changes how you think about the work. Knowing the work log will be posted to Slack makes you consider how teammates will interpret it. Knowing it’s archived in Notion makes you think about searchability. The broadcast destination shapes the content.
This could go two ways. Positive: the discipline of public documentation forces clearer thinking. Writing for an audience makes you articulate decisions explicitly. Negative: performing documentation crowds out actual work. The work log becomes the product rather than the record.
The line is probably around “does writing this help me understand my own work better?” If yes, the documentation is thinking-as-writing. If it feels like filling out a form for an audience, it’s become bureaucratic overhead.
The distribution system tries to stay on the right side by making broadcast automatic and graceful. You write the work log once, locally, in editorial format. The system handles routing to multiple destinations. You’re not manually posting to three places, which would feel like overhead. The broadcast happens as a side effect.
But there’s still a cost: format conversion code to maintain, config files to update, webhook failures to debug. The infrastructure pays for itself if the work logs are valuable and the broadcast actually reaches people who benefit. If nobody reads them, or if they read them but don’t act on them, the system is just complexity.
The answer probably varies by team and context. For distributed teams where context-switching cost is high, work log broadcast might be essential. For co-located teams already in sync, it might be overhead. For solo developers, it might be useful for future-you but overkill for present-you.
The system provides capability. Whether that capability is valuable depends on whether information flow is a bottleneck. If your team struggles to stay informed about what’s happening across projects, this solves a real problem. If your team already over-communicates, this adds noise.
Closing thoughts
The work log distribution system is really about reducing context-switching cost. Instead of checking six repositories to understand progress, the information comes to you. Instead of remembering to update Notion and post to Slack, the broadcast happens automatically. Instead of one failure stopping everything, graceful degradation provides partial success.
But the architecture matters as much as the functionality. Modular utilities make the system maintainable. Graceful degradation makes failures observable. Per-project routing makes adoption voluntary. Format conversion makes content readable. These aren’t implementation details - they’re the difference between a system that helps and one that adds complexity without benefit.
The questions the system raises are more interesting than the system itself. When should automation preserve local conventions versus imposing standards? How do you design for graceful degradation without making failures invisible? When does documentation infrastructure improve thinking versus create overhead? How do broadcast systems avoid becoming noise generators?
These tensions don’t resolve. Good tools provide structure while allowing flexibility, standardize gradually without forcing compliance, fail visibly but continue partially, and support their own evolution. The work log distribution system tries to walk these lines. Whether it succeeds depends on whether it reduces friction more than it adds complexity.
This is a personal workflow built with Claude Code’s custom skills - the utilities and skill definitions aren’t public, but the architectural patterns are universal. If you’re building something similar, the hard parts aren’t the webhooks or API calls. They’re the design decisions about graceful degradation, format conversion, and voluntary adoption that determine whether the system actually helps. The modular design - one utility per destination, orchestrator for coordination, JSON config for routing - applies regardless of your specific tech stack or destinations.