Authexis work log — 2026-03-26
What shipped today
Hashtag pipeline — seed, edit, generate
The social queue now has a complete hashtag pipeline. The first piece was the seeding logic: when content gets added to the social queue, `addContentToQueue` now extracts any inline #hashtags baked into the LLM-generated post body, strips them from the visible text, and writes them to the `social_posts.hashtags` column. If the post has no inline hashtags, it falls back to the source content's tags array formatted as `#tag`. This resolves the mystery of "my posts already have hashtags" — they were baked inline by the LLM all along, and now they're a proper first-class field.
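A minimal sketch of that seeding logic. The helper names (`extractHashtags`, `seedHashtags`) and the regex are illustrative assumptions, not the actual implementation in `addContentToQueue`:

```typescript
// Matches inline hashtags like #tag1 in the LLM-generated post body.
const HASHTAG_RE = /#([A-Za-z0-9_]+)/g;

// Pull inline #hashtags out of the body and strip them from the visible text.
function extractHashtags(body: string): { text: string; hashtags: string[] } {
  const hashtags = Array.from(body.matchAll(HASHTAG_RE), (m) => `#${m[1]}`);
  const text = body.replace(HASHTAG_RE, "").replace(/\s+/g, " ").trim();
  return { text, hashtags };
}

// Seed from inline hashtags first; fall back to the source content's tags
// formatted as #tag when the body carries none.
function seedHashtags(
  body: string,
  contentTags: string[]
): { text: string; hashtags: string[] } {
  const { text, hashtags } = extractHashtags(body);
  if (hashtags.length > 0) return { text, hashtags };
  return { text, hashtags: contentTags.map((t) => `#${t.replace(/\s+/g, "")}`) };
}
```

The extracted array would be joined with spaces before writing to the `TEXT` column noted under flags below.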
The second piece was the UI: every social queue post card now has an inline `HashtagEditor` component that renders chips with × delete and a + add input. Users can add or remove hashtags without leaving the queue. The third piece was AI generation: a new engine handler `social_post.generate_hashtags` prompts Claude for 3–5 relevant hashtags given the post body and source content context, writing the result to `social_posts.hashtags`. `addContentToQueue` dispatches this command automatically when no hashtags were seeded from inline tags or content tags. Together these three changes (#1742, #1743, #1744) close the loop on hashtag management.
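A hedged sketch of the post-processing side of the generation handler: the model reply arrives as free text, and something has to normalize it into the stored hashtag string. `parseHashtagReply` is an assumed name, not the real handler code:

```typescript
// Hypothetical post-processing for the social_post.generate_hashtags handler.
// The model is asked for 3-5 hashtags; this normalizes whatever comes back
// (prose, duplicates, mixed case) into the value stored in social_posts.hashtags.
function parseHashtagReply(reply: string): string {
  const tags = Array.from(
    reply.matchAll(/#[A-Za-z0-9_]+/g),
    (m) => m[0].toLowerCase()
  );
  // Dedupe while preserving order, cap at 5, store space-separated.
  return [...new Set(tags)].slice(0, 5).join(" ");
}
```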
Guided content refresh
Content drift is a real problem — a piece written three months ago about a pending regulation is wrong the moment the regulation passes. The new "Refresh content" flow gives users a frictionless way to handle this. A "Refresh content →" link now appears on content in review or final status. Clicking it expands an inline panel (no modal) where the user describes what changed. On submit, the refresh note is persisted to a new `contents.refresh_notes` column, a `content.refresh_requested` event is logged in history, and three `content.redo_field` commands are queued for draft, intro, and title — each carrying the user's note as revision context. The existing `redo_field` handler already supported a `revision_notes` payload that prepends a "REVISION REQUESTED:" header before generation, so no engine changes were needed. One migration (#1736).
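The fan-out described above can be sketched as follows. The function names are illustrative; only the command name, field list, `revision_notes` payload, and the "REVISION REQUESTED:" header come from the flow itself:

```typescript
// Hypothetical: queue one content.redo_field command per regenerated field,
// each carrying the user's refresh note as revision context.
function buildRedoCommands(contentId: string, note: string) {
  return (["draft", "intro", "title"] as const).map((field) => ({
    command: "content.redo_field" as const,
    content_id: contentId,
    field,
    revision_notes: note,
  }));
}

// The existing redo_field handler already prepends a header before generation
// when revision_notes is present, so the engine needed no changes.
function buildRedoPrompt(basePrompt: string, revisionNotes?: string): string {
  if (!revisionNotes) return basePrompt;
  return `REVISION REQUESTED: ${revisionNotes}\n\n${basePrompt}`;
}
```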
Image work — shipped then reverted
A Vercel Blob upload route and `ImageUploader` component were built and shipped (#1745, #1746), then reverted after realizing Vercel Blob hasn't been provisioned for this project. The code was clean and the approach was sound, but pushing a `/api/upload/image` endpoint that returns 500 in prod was the wrong call. The issues are reopened and parked until the Blob store is set up.
Pipeline housekeeping
Continued running /issue next --auto throughout the day. #1735 (hashtags, too big) was decomposed into three children; #1737 (image uploads, even bigger) was decomposed into three children with a clear dependency order. Several issues were batch-prepped in background agents running in parallel. The pipeline is in good shape heading into tomorrow with a clear queue.
Completed
- #1722 — distribution queue guardrails (previous session, shipped at session start)
- #1736 — guided “refresh outdated content” flow
- #1742 — seed hashtags from content tags when adding to social queue
- #1743 — inline hashtag editor on social queue post cards
- #1744 — AI-generate hashtags engine command (`social_post.generate_hashtags`)
- #1745 — Vercel Blob upload route + ImageUploader (shipped and reverted)
- #1746 — wire ImageUploader into content detail (shipped and reverted)
Release progress
- v1.5 — 49 closed / 1 open. One lingering backlog item, milestone effectively complete.
Carry-over
- #1747 (`ready-for-prep`) — Add `image_url` to social posts: DB column, queue UI, and platform publishing. Needs prep before it's grindable.
- #1745 (`ready-for-prep`) — Vercel Blob setup. Blocked on provisioning a Blob store in the Vercel dashboard (Storage → Create → Blob). Once `BLOB_READ_WRITE_TOKEN` is in env, this can be re-executed quickly — the code was already written.
- #1746, #1738 — both `blocked` on #1745.
Risks
- Image upload blocked until Blob is provisioned. Three issues (#1745, #1746, #1738) are stacked behind this. The Vercel dashboard action is manual and takes 2 minutes, but it hasn’t happened yet.
- `BLOB_READ_WRITE_TOKEN` absent from all environments. `vercel env ls` confirms no blob token exists. If someone tries to run the upload route (still referenced in git history), it will 500.
Flags and watch-outs
- Tester (Jeffrey) received a spurious "outline is ready" email — root cause was a stale `notify_content_ready(stage_name="outline")` call left over from when outline was a visible stage. Fixed in a previous session by removing the call and adding the correct `interview_ready` notification instead.
- Hashtags are a `TEXT` column (space-separated string like `"#tag1 #tag2"`), not `TEXT[]`. The issue spec said `TEXT[]` but the API and DB use plain text. Future work should keep this consistent.
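To keep that `TEXT` convention consistent, a pair of tiny round-trip helpers is one option. These names are hypothetical, shown only to pin down the format:

```typescript
// Round-trip for the space-separated TEXT hashtags column.
// Write: join tags with single spaces. Read: split on whitespace,
// dropping empty fragments from leading/trailing/double spaces.
const toHashtagColumn = (tags: string[]): string => tags.join(" ");
const fromHashtagColumn = (value: string): string[] =>
  value.split(/\s+/).filter(Boolean);
```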
Next session
- Provision Vercel Blob — go to Vercel dashboard → authexis project → Storage → Create Blob store. Run `vercel env pull` after to get `BLOB_READ_WRITE_TOKEN` locally. Then re-execute #1745 (fast, code already written in git history).
- Prep #1747 (`social_posts.image_url` + platform publishing) — big issue, needs careful scoping especially for LinkedIn's Assets API.
- Continue `/issue next --auto` — pipeline has `ready-for-prep` issues to chew through.