Axo delivers curated, stack-specific AI tactics straight to your team's Slack, so adoption happens naturally.
A preview of real Axo posts. Each one will be tailored to your stack and includes tactics your team can try right away.
Claude Code now supports scheduled cloud tasks
Claude Code's scheduled tasks can now run on cloud infrastructure, so they execute even when your machine is off.
Setup: From the Claude Desktop app, go to "Scheduled tasks" or type /schedule in any existing session. You can also set them up on the web at claude.ai/code. The creation dialog lets you configure:
• Name for the task
• Prompt describing what Claude should do
• Repo(s) to run against (with a Cloud execution badge)
• Frequency and time (e.g. daily at 9:00 AM)
• Knowledge/connectors to include (Slack, Google Drive, etc.)
Example from the demo: A task named review-open- that runs daily at 9 AM, looks at everything merged to main since the previous evening, writes a brief on what shipped, flags risky changes, and posts it to #eng-standup on Slack. Five bullets, ready before coffee.
This is a big step up from session-scoped /loop tasks, which only ran while Claude Code was active in your terminal. Scheduled cloud tasks are persistent and run unattended.
Source: Noah Zweben (@noahzweben)
Prompting GPT-5.4 for better frontend output
OpenAI published a guide on steering GPT-5.4 toward polished, intentional frontends instead of generic SaaS layouts. The core idea: underspecified prompts produce overrepresented training patterns. Tighter constraints produce better design.
Four quickstart practices:
1. Start at a low reasoning level (medium for ambitious work). Higher reasoning can lead to overthinking on simpler sites.
2. Define your design system upfront: typography, color palette, layout constraints.
3. Attach visual references or a mood board. The model can now generate its own mood board first, then use those assets in the build.
4. Structure the page as a narrative (hero, support, detail, social proof, CTA) rather than letting the model decide.
What changed in the model:
• Native image search and generation during the design process. You can instruct it to generate visuals and reuse them across the build.
• Trained for computer use. Paired with Playwright, it can inspect rendered pages, test viewports, and verify its own work visually.
• More functionally complete apps over long-horizon tasks.
The starter prompt they recommend defines hard rules like "one composition per first viewport," "no cards by default," "full-bleed hero only," and "brand must be hero-level signal, not just nav text." It's worth copying directly into your system prompt or skill file:
## Frontend tasks
When doing frontend design tasks, avoid generic, overbuilt layouts.
**Use these hard rules:**
- One composition: The first viewport must read as one composition, not a dashboard.
- Brand first: The brand or product name must be a hero-level signal.
- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Full-bleed hero only on landing pages.
- Cards: Default no cards. Cards only when they are the container for a user interaction.
- One job per section.
They also published a full frontend-skill with detailed rules for composition, motion, imagery, and copy. Installable in Codex with $skill-installer frontend-skill.
Source: OpenAI Developers (@OpenAIDevs)
Claude Code Channels: Control Your Session from Telegram or Discord
Anthropic just shipped Claude Code Channels — you can now connect Telegram or Discord to a running Claude Code session via MCP servers. Basically, message Claude Code from your phone and get responses back in the same chat thread.
Under the hood, a channel is an MCP server that pushes events (like incoming messages) into your active session. Channels can be one-way (alerts only) or two-way (Claude reads and replies). Events show up in your terminal as XML, and replies route back to the messaging platform.
Requirements
• Claude Code v2.1.80+ (check with claude --version)
• Logged in via claude.ai (API key auth doesn't work here)
• Team/Enterprise orgs need channelsEnabled set to true in admin settings
Telegram setup
1. Create a bot via @BotFather on Telegram (/newbot)
2. Install the plugin inside Claude Code:
/plugin install telegram@claude-plugins-official
3. Add your bot token to .claude/channels/telegram/.env
4. Restart Claude Code with the channels flag:
claude --channels telegram
Discord follows a similar pattern using its own plugin from the same official repo.
Worth noting
• Events only arrive while the session is running. Use tmux or screen if you want always-on behavior.
• The --channels flag is required every time you start a session with channels enabled.
• This is a research preview — the protocol and flag syntax may change.
• You can build custom channels for other platforms using the MCP SDK by advertising the claude/channel capability.
Source: Thariq (@trq212)
Custom Subagents in Codex
You can define your own specialized subagents as standalone TOML files. They run in parallel with their own model and instructions, and Codex merges everything into a single response when they're done.
Put them in one of two places:
• ~/.codex/agents/ for personal agents
• .codex/agents/ for project-scoped agents
Each file defines one agent. Three fields are required: name, description, and developer_instructions. You can also set model, model_reasoning_effort, sandbox_mode, mcp_servers, etc.
Here's a read-only PR reviewer agent (.codex/agents/reviewer.toml):
name = "reviewer"
description = "PR reviewer focused on correctness, security, and missing tests."
model = "gpt-5.4"
model_reasoning_effort = "high"
sandbox_mode = "read-only"
developer_instructions = """
Review code like an owner.
Prioritize correctness, security, behavior regressions, and missing test coverage.
Lead with concrete findings, include reproduction steps when possible, and avoid style-only comments unless they hide a real bug.
"""
nickname_candidates = ["Atlas", "Delta", "Echo"]
For global settings, add an [agents] section to your config.toml:
[agents]
max_threads = 6 # concurrent agent cap
max_depth = 1 # nesting depth (keep at 1 unless you need recursive delegation)
Some notes:
• Codex ships with three built-in agents: default, worker, and explorer. Custom agents with the same name will override them.
• Subagents inherit your current sandbox policy, but you can override it per-agent (e.g., sandbox_mode = "read-only" for explorers).
• Each subagent does its own model and tool work, so these workflows burn more tokens than single-agent runs.
• nickname_candidates gives spawned agents distinct display names in the UI — useful when you're running many instances of the same type.
• There's also an experimental spawn_agents_on_csv tool for batch work: you give it a CSV where each row is a task, it spins up one worker per row, and exports results back to CSV.
If you want to try it on a PR:
Review this branch against main. Spawn one agent per concern, wait for all, and summarize:
1. Security issues
2. Code quality
3. Bugs
4. Race conditions
5. Test flakiness
6. Maintainability
The docs have two full multi-agent walkthroughs (PR review and frontend debugging) with complete TOML files: Codex Subagents docs
Source: Nick (@nickbaumann_)
How Anthropic Uses Claude Code Skills Internally
Anthropic posted a detailed breakdown of how they use Skills across their team — they have hundreds in active use at this point. Lots of good stuff in here on skill types, writing advice, and how they distribute them internally.
9 Categories of Skills
These are the recurring types they've identified, roughly mapped to different parts of a team's workflow:
1. Library & API Reference — Teach Claude your internal libraries, CLIs, and SDKs. Include reference snippets and gotcha lists.
2. Product Verification — Pair with tools like Playwright or tmux to verify Claude's output actually works. They suggest having an engineer spend a full week making verification skills excellent.
3. Data Fetching & Analysis — Connect to your data stack with credentials, dashboard IDs, and common query patterns.
4. Business Process Automation — Automate standups, ticket creation, weekly recaps. Store previous results in log files so Claude can reflect on past runs.
5. Code Scaffolding & Templates — Generate boilerplate with natural language requirements baked in.
6. Code Quality & Review — Run as hooks or in GitHub Actions. One example: spawn a "fresh-eyes subagent" to critique code, fix issues, and iterate until findings are nitpicks.
7. CI/CD & Deployment — Monitor PRs, retry flaky CI, resolve merge conflicts, gradual traffic rollout with auto-rollback.
8. Runbooks — Take a symptom (alert, error, Slack thread), walk through investigation, produce a structured report.
9. Infrastructure Operations — Find orphaned resources, manage dependencies, investigate cost spikes.
Key Writing Tips
• Don't state the obvious. Claude already knows a lot about coding. Focus on information that pushes it out of its default patterns.
• The Gotchas section is the highest-signal content in any skill. Update it over time as Claude hits new failure modes.
• Use the file system for progressive disclosure. A skill can be a whole folder — point Claude to references/api.md, assets/template.md, or helper scripts so it reads them only when relevant.
• Avoid railroading. Give Claude information, but let it adapt to the situation. Overly specific instructions break reusability.
• The description field is for the model, not humans. Claude scans skill descriptions to decide which one to invoke, so write it as a trigger condition rather than a summary.
• Store data for memory. Use append-only logs, JSON files, or even SQLite so Claude can read its own history across sessions. Store in ${CLAUDE_PLUGIN_DATA} to survive upgrades.
• Include scripts and libraries. Give Claude composable helper functions so it spends turns on decisions instead of reconstructing boilerplate.
• Use on-demand hooks. Skills can register hooks that only activate when invoked. Example: a /careful skill that blocks rm -rf, DROP TABLE, and force-push only when touching prod.
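The memory tip is easy to sketch. Below is a minimal append-only JSONL log that a skill's helper script might use, assuming a Node.js environment; the runs.jsonl filename and entry shape are invented for illustration, and the path falls back to the current directory when CLAUDE_PLUGIN_DATA is unset:

```typescript
import { appendFileSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical log location; writing under CLAUDE_PLUGIN_DATA survives upgrades.
const logPath = join(process.env.CLAUDE_PLUGIN_DATA ?? ".", "runs.jsonl");

// Append one JSON object per line so past runs stay cheap to scan.
function logRun(entry: Record<string, unknown>): void {
  const line = JSON.stringify({ ts: new Date().toISOString(), ...entry });
  appendFileSync(logPath, line + "\n");
}

// Read the whole history back so Claude can reflect across sessions.
function readRuns(): Array<Record<string, unknown>> {
  if (!existsSync(logPath)) return [];
  return readFileSync(logPath, "utf8")
    .trim()
    .split("\n")
    .map((l) => JSON.parse(l));
}
```

Append-only JSONL keeps writes atomic-ish and diffs readable; reach for SQLite only once you need queries over the history.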
Distribution
For smaller teams, checking skills into your repo under .claude/skills/ works fine. Larger orgs can run an internal plugin marketplace where teams upload and install plugins via /plugin marketplace add.
Anthropic's own marketplace process is pretty organic: post a skill in a sandbox folder, share it in Slack, and once it gets traction, PR it into the marketplace. They track usage with a PreToolUse hook to find undertriggering or popular skills.
Docs: Claude Code Skills | Hooks | Plugins
Source: Thariq (@trq212)
Stop Claude Code from ignoring your CLAUDE.md
Claude Code wraps your CLAUDE.md in a <system_reminder> tag that tells the model the contents "may or may not be relevant." The longer the file gets, the more Claude treats individual sections as optional. Even well-written instructions get skipped, especially for task-specific rules like testing conventions or API patterns.
The fix is to wrap task-specific sections in <important if="condition"> tags.
<important if="you are writing or modifying tests">
- Use `createTestApp()` helper for integration tests
- Mock database with `dbMock` from `packages/db/test`
- Test fixtures live in `__fixtures__/` directories
</important>
The idea is pretty straightforward — explicit conditions give Claude a clearer signal about when to apply instructions instead of leaving it to guess what's relevant. The team at hlyr.dev reports noticeably better adherence on tasks where only some CLAUDE.md sections should apply.
So what should you wrap? Not everything. Project identity, directory structure, tech stack — leave those alone. They're relevant to basically every task. Testing setup, deployment procedures, API conventions, domain-specific rules that only matter sometimes — those are good candidates.
One thing that matters: make the conditions narrow. <important if="you are writing code"> matches almost everything and defeats the purpose. Something like "you are writing or modifying tests" or "you are building API endpoints" actually gives Claude a meaningful filter.
You can also automate this. There's a Claude Code skill that will restructure your CLAUDE.md — it separates out foundational content, wraps the domain-specific stuff in conditional blocks, and cleans up anything stale or vague:
npx skills add dexhorthy/slopfiles --skill improve-claude-md
Then run /improve-claude-md in your project. Source on GitHub.
Check what it spits out before committing it, but it's a decent starting point if your CLAUDE.md has gotten out of hand. The full blog post goes into the reasoning more.
Source: dex (@dexhorthy)
Prompt snippets for coding agents from Notion's Simon Last
Simon Last (co-founder of Notion) shared three prompt patterns he uses when working with coding agents. Each one targets a specific failure mode.
Simplifying specs
Please step back and think again. How can we make this SIMPLER and DUMBER while still achieving our goals?
Useful when iterating on a spec or design doc with an agent. LLMs tend to over-engineer solutions, and this nudge pushes them toward the simplest version that works. Worth dropping into your CLAUDE.md or system prompt as a recurring check.
Planning before coding
<goal here>
Please research the codebase to make sure you fully understand how stuff currently works, and make a detailed plan to achieve the above.
Your plan must be fully fleshed out and include only concrete actions such as editing files and running commands (ie. your plan cannot include actions like "read this file").
Your plan must also include how you will verify it is correct yourself, end-to-end.
Don't make any code changes yet. Research now as much as you need and then chat back with your full plan when ready.
Two things stand out here:
• The "only concrete actions" constraint prevents the agent from padding the plan with vague steps like "understand the module." Every step has to be an edit or a command.
• Requiring the agent to describe how it'll verify correctness forces it to think about end-to-end testing upfront, not as an afterthought.
This maps well to the "plan then execute" pattern recommended in Anthropic's Claude Code best practices .
Debugging
Please think carefully and truth seek. Don't just blindly re-run things and hope they will work. Read the code, and liberally add temporary logging statements anywhere in the codebase to verify logic works as you expect (just remember to not commit them). Dig deep and take the time to really figure out the root causes of problems.
This fights the most common agent debugging failure: blindly retrying or making surface-level fixes without understanding the root cause. Explicitly giving the agent permission to "liberally add temporary logging statements" matters here — it's a concrete debugging strategy rather than just telling it to "think harder."
All three are less about telling the agent what to build and more about constraining how it works, which in my experience is where you actually get mileage out of these kinds of prompts.
Source: Simon Last (@simonlast)
Banning useEffect as a guardrail for AI-generated React code
Factory shared their rule: no direct useEffect in their codebase. The reasoning is practical — and it turns out to also help a lot when AI coding agents are writing your React, since they're notoriously bad at reasoning about the render loop.
useEffect is implicit synchronization logic. Dependency arrays hide coupling, create infinite-loop risks, and make control flow hard to trace. Agents make this worse because they love to reach for useEffect "just in case," and you end up with chains of interdependent effects nobody wants to touch.
Here are the five replacement patterns they use:
1. Derive state inline. If filteredProducts is just products.filter(...), compute it directly — don't sync it with an effect.
2. Use a data-fetching library (React Query, SWR) for anything that's currently useEffect + fetch + setState. You get cancellation, caching, and race condition handling for free.
3. Put logic in event handlers. If something happens because a user clicked a button, do the work in onClick. Effects shouldn't be your default.
4. Wrap useEffect(cb, []) in a named useMountEffect hook for one-time external sync (DOM integration, third-party widgets). The name makes the intent obvious.
5. Use key to reset components. If a component should start fresh when an ID changes, <Component key={id} /> forces a remount — way cleaner than wiring up dependencies to re-sync state.
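Pattern 1 is the easiest to internalize. Here is a sketch of the derive-don't-sync idea in plain TypeScript; the Product type and field names are invented for illustration. In a component you'd compute this inline during render rather than mirroring it into state with an effect:

```typescript
type Product = { name: string; price: number; inStock: boolean };

// Derived value: recomputed from props/state on every render.
// It never lives in its own useState and never needs a useEffect to stay in sync.
function filteredProducts(products: Product[], showOnlyInStock: boolean): Product[] {
  return showOnlyInStock ? products.filter((p) => p.inStock) : products;
}
```

If the computation is expensive, wrapping it in useMemo gets you caching without reintroducing an effect.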
This is especially relevant right now because of AI agents. As @dexhorthy noted, they had to rip out and rebuild a big chunk of their frontend after agents produced a mess of useEffect chains. Banning the hook pushes you toward declarative patterns that are easier for both humans and agents to get right.
React's own docs basically say the same thing — their You Might Not Need an Effect guide covers most of these patterns.
To enforce this:
• Add an ESLint rule via no-restricted-syntax to flag direct useEffect calls
• Document the policy in your AGENTS.md or equivalent agent instructions file so coding agents follow it too
• The one legitimate escape hatch (useEffect(cb, [])) should live in a named useMountEffect hook
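The ESLint piece can be sketched with no-restricted-syntax. This .eslintrc fragment flags bare useEffect(...) calls; the message text is just a suggestion, and you'd add a second selector for React.useEffect if your codebase uses the namespaced form:

```json
{
  "rules": {
    "no-restricted-syntax": [
      "error",
      {
        "selector": "CallExpression[callee.name='useEffect']",
        "message": "Direct useEffect is banned. Use useMountEffect, a data-fetching library, derived state, or an event handler instead."
      }
    ]
  }
}
```

Exempt the file that defines useMountEffect with an eslint-disable comment so the one sanctioned call site still lints clean.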
Source: dex (@dexhorthy)
Codex Subagents: Forked vs. Fresh Context
OpenAI's Codex app now supports subagents — you can spin up child agents to handle subtasks in parallel. Nick Baumann has been using this a bunch, and the thing that jumped out from his workflow: whether the subagent inherits the parent's context changes how useful it is.
There are two modes, and they're good for different things:
Forked context inherits the parent's context. This is great for offloading rote work from your main thread without losing continuity. Say you need a subagent to run a local server or grind through a repetitive task — it carries forward what it needs, and your parent's context window stays clean.
Fresh context starts with nothing from the parent. The best use case here is code review. You get a subagent that reviews your branch without any of the biases that built up while you were implementing. Genuinely useful blank slate.
Codex infers which mode to use automatically, but you can be explicit:
hey codex spawn a subagent to review my branch before we post a PR
If you ask for an "unbiased review" or similar, that signals Codex to skip the parent context and start fresh.
A couple of config options worth knowing about in config.toml:
[agents]
max_threads = 6 # max concurrent subagent threads (default: 6)
max_depth = 1 # nesting depth for spawned threads (default: 1)
This isn't unique to Codex, either. Claude Code has /agents for spawning isolated subagents, and people have built similar multi-agent wrappers around Codex CLI. Everyone's arriving at the same place: context windows fill up fast, and subagents are the pressure valve.
Source: Nick (@nickbaumann_)
"Make Interfaces Feel Better" as an Agent Skill
Jakub Krehel wrote a detailed article on small UI details that compound into great interfaces and then turned all of the tips into a reusable agent skill you can install for Claude Code, Codex, Cursor, and other AI coding agents.
Install it:
npx skills add jakubkrehel/make-interfaces-feel-better
Then invoke it with /make-interfaces-feel-better in your agent to apply the principles to whatever UI you're building.
Here's what the skill covers:
• text-wrap: balance / pretty to avoid orphaned words
• Concentric border radius for nested elements (outer = inner + padding)
• Contextual icon animations with opacity, scale, and blur
• Crisper macOS text via -webkit-font-smoothing: antialiased
• font-variant-numeric: tabular-nums so dynamic numbers don't jump around
• Interruptible animations — CSS transitions for interactions, keyframes for one-shot sequences
• Split and stagger patterns for enter animations
• Subtle exit animations: fixed small offset instead of full-height movement
• Optical vs. geometric alignment for icons and buttons
• Multi-layer box-shadow instead of borders for depth
• A simple outline: 1px solid rgba(0,0,0,0.1) on images for subtle edge definition
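Several of these are one-line declarations you can try immediately. A quick CSS sketch (selectors and class names are invented for illustration):

```css
/* Balance headline line breaks so no single word is orphaned */
h1, h2 { text-wrap: balance; }

/* Keep dynamic numbers (timers, counters) from shifting width as digits change */
.stat { font-variant-numeric: tabular-nums; }

/* Subtle edge definition on images without a heavy border */
img.card-image { outline: 1px solid rgba(0, 0, 0, 0.1); outline-offset: -1px; }

/* Crisper text rendering on macOS */
body { -webkit-font-smoothing: antialiased; }
```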
The original article has interactive demos for each technique, with code snippets using both CSS and Motion. Worth reading even without the skill.
npx skills is a CLI that auto-detects which agents you have installed and drops skill files into the right directories. You can use -g for global install or -a to target specific agents. Docs on the skills CLI .
Source: Jakub Krehel (@jakubkrehel)