Blog Syndication: Cross-Publishing Blog Posts to Dev.to, Hashnode, and Medium

I recently migrated my blog to a self-hosted Astro site. Most developers have a presence on Dev.to, Hashnode, and Medium. I wanted to syndicate my posts there too, and was curious what automation exists in this space today.

So I built a small pipeline that handles it automatically. Push a new post to my Astro site, and GitHub Actions cross-publishes it to Dev.to and Hashnode with the canonical URL pointing back to my site. Medium is a different story, which I’ll get to.

Why canonical URLs matter

Before getting into the code, this is the one thing you should care about if you cross-publish anything. Every platform lets you set a canonical URL — canonical_url on Dev.to, originalArticleURL on Hashnode. It’s basically a pointer that says “the original lives on my site.” If you don’t set it, Google sees three copies and will probably rank the platform version higher than yours.

Set the canonical URL. Every time. No exceptions.

Dev.to has a straightforward REST API

Dev.to is the simplest one. You generate an API key at dev.to/settings/extensions, and then it’s just a POST request:

const payload = {
  article: {
    title,
    body_markdown: body,
    published: true,
    tags: tags.slice(0, 4).map(t => t.toLowerCase().replace(/[^a-z0-9]/g, '')),
    canonical_url: canonicalUrl,
  },
};

const res = await fetch('https://dev.to/api/articles', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': process.env.DEVTO_API_KEY,
  },
  body: JSON.stringify(payload),
});

Dev.to limits posts to four tags, all lowercase, with no special characters. You also need to watch their rate limit; too many requests in quick succession gets you an HTTP 429. The response payload includes the article ID and URL, which I store in a tracking file so I don’t publish the same post twice.
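To stay under the rate limit, one option is a small retry wrapper around the request. A minimal sketch — the retry count and backoff values here are arbitrary choices of mine, not Dev.to’s documented limits:

```javascript
// Sketch: retry a request function when Dev.to answers with HTTP 429.
// `fn` is any async function returning a fetch-style response ({ status, ... }).
async function withRetryOn429(fn, { retries = 3, delayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fn();
    // Return anything that isn't a rate-limit response, or give up after `retries`
    if (res.status !== 429 || attempt >= retries) return res;
    // Linear backoff before trying again
    await new Promise(resolve => setTimeout(resolve, delayMs * (attempt + 1)));
  }
}
```

Then wrap the fetch call above in it: `const res = await withRetryOn429(() => fetch(...));`.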

Hashnode uses GraphQL

Hashnode’s API is GraphQL-based. You need a Personal Access Token from hashnode.com/settings/developer and your publication ID. If you know your blog URL, you can get the publication ID without even logging in:

curl -s -X POST https://gql.hashnode.com \
  -H "Content-Type: application/json" \
  -d '{"query":"{ publication(host:\"yourblog.hashnode.dev\") { id title } }"}'

The publish mutation looks like this:

const mutation = `
  mutation PublishPost($input: PublishPostInput!) {
    publishPost(input: $input) {
      post { id url }
    }
  }
`;

const variables = {
  input: {
    title,
    contentMarkdown: body,
    publicationId,
    tags: tags.map(t => ({
      name: t,
      slug: t.toLowerCase().replace(/[^a-z0-9]+/g, '-'),
    })),
    originalArticleURL: canonicalUrl,
    slug,
  },
};

Hashnode tags are objects with both a name and a slug, which is a little more involved than Dev.to’s plain strings. The originalArticleURL field is their version of the canonical URL.
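To actually send the mutation, you POST the query and variables to gql.hashnode.com with your PAT in the Authorization header (Hashnode takes the raw token, without a Bearer prefix, as far as I can tell). A sketch that just assembles the request:

```javascript
// Sketch: assemble the GraphQL request for Hashnode's endpoint.
// `mutation` and `variables` are the values defined above; `token` is your PAT.
function buildHashnodeRequest(mutation, variables, token) {
  return {
    url: 'https://gql.hashnode.com',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: token, // raw PAT, no "Bearer" prefix
      },
      body: JSON.stringify({ query: mutation, variables }),
    },
  };
}
```

Fire it with `const res = await fetch(req.url, req.options);`, then check `(await res.json()).errors` before reading `data.publishPost.post`.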

Medium dropped support for API tokens

You can’t programmatically publish to Medium anymore. At least not through any official channel.

But you can still get your posts on Medium manually with the canonical URL intact. Medium has an “Import a story” feature that does exactly this:

  1. Go to medium.com/me/stories.
  2. Click the Import a story button.
  3. Paste your post’s URL (e.g. https://www.nvarma.com/blog/your-post-slug/).
  4. Medium will import the content and automatically set the canonical URL to point back to your blog post.

That last part is the important bit: when you use the import tool, Medium sets the canonical link to the original URL, so search engines still know where the post originated. While the manual step sucks, I’m glad there’s still a relatively painless way to do it.

For older posts you want to bring over, the process is the same: import each one by URL. Medium pulls the content, preserves your formatting reasonably well, and sets the canonical link. You might need to clean up some formatting afterward, so proofread each one before publishing.

Sanitizing MDX for other platforms

Some of my posts are written in MDX and use custom image components. That markup doesn’t render on Dev.to or Hashnode, which expect plain markdown.

I wrote a sanitizer that transforms the content into portable markdown:

// Strip MDX imports
content = content.replace(/^import\s+.*$/gm, '');

// Convert <figure>/<img>/<figcaption> to markdown
content = content.replace(
  /<figure[^>]*>\s*<img\s+src="([^"]+)"\s+alt="([^"]*)"[^>]*\/?>\s*(?:<figcaption>([\s\S]*?)<\/figcaption>)?\s*<\/figure>/g,
  (_match, src, alt, caption) => {
    let result = `![${alt}](${resolveUrl(src)})`;
    if (caption) result += `\n*${caption.trim()}*`;
    return result;
  }
);

// Replace Astro components with "see original" links
content = content.replace(
  /<([A-Z][A-Za-z]+)[^>]*\/?>/g,
  (_match, componentName) => {
    return `*[Interactive ${componentName} — see original post](${canonicalUrl})*`;
  }
);

// Resolve relative paths to absolute URLs
content = content.replace(
  /!\[([^\]]*)\]\((?!https?:\/\/)([^)]+)\)/g,
  (_match, alt, src) => `![${alt}](${SITE_URL}${src})`
);

The Astro component replacement is my favorite part. If I have a <BeforeAfterCarousel /> in my Astro rebuild post, the cross-published version gets a link that says “Interactive BeforeAfterCarousel — see original post” instead of broken HTML. Not perfect, but it’s honest and sends people to the real thing.
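Pulling a couple of those steps into a pure function makes the pass easy to sanity-check. A sketch covering the import, component, and image steps — the site and canonical URLs here are placeholders:

```javascript
// Sketch: a subset of the sanitizer steps as one pure function (placeholder URLs).
const SITE_URL = 'https://www.example.com';
const CANONICAL_URL = `${SITE_URL}/blog/demo-post/`;

function sanitizeMdx(content) {
  return content
    // Strip MDX imports
    .replace(/^import\s+.*$/gm, '')
    // Replace capitalized components with "see original" links
    .replace(/<([A-Z][A-Za-z]+)[^>]*\/?>/g,
      (_m, name) => `*[Interactive ${name} — see original post](${CANONICAL_URL})*`)
    // Resolve relative image paths to absolute URLs
    .replace(/!\[([^\]]*)\]\((?!https?:\/\/)([^)]+)\)/g,
      (_m, alt, src) => `![${alt}](${SITE_URL}${src})`)
    .trim();
}
```

Feeding it a snippet like `<Carousel client:load />` plus a relative image produces the “see original” link and an absolute image URL, with the import line gone.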

I also prepend each post with an “Originally published on nvarma.com” header and append a footer with a link back. A little self-promotional, but that’s kind of the whole point of cross-publishing.

Automating it with GitHub Actions

The workflow triggers whenever I push changes to src/content/blog/** on main:

name: Cross-Publish Blog Posts

on:
  push:
    branches: [main]
    paths:
      - 'src/content/blog/**'
  workflow_dispatch:
    inputs:
      post_id:
        description: 'Specific post ID to publish (filename without extension)'
        required: false

The workflow_dispatch trigger lets me manually publish a specific post if I need to. The script reads all blog posts, checks a tracking JSON file to see what’s already been published, and only processes new posts. It also skips posts older than 30 days to avoid flooding the syndication platforms with old content.
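The selection logic boils down to two checks, something like this — the field names are my guesses at the script’s internal shape, not its actual code:

```javascript
// Sketch: decide whether a post should be cross-published.
const MAX_AGE_DAYS = 30;

function shouldPublish(post, tracking, now = new Date()) {
  // Already recorded in the tracking file? Skip it.
  if (tracking[post.id]) return false;
  // Older than 30 days? Skip it so old content isn't re-syndicated.
  const ageDays = (now - new Date(post.publishedAt)) / (1000 * 60 * 60 * 24);
  return ageDays <= MAX_AGE_DAYS;
}
```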

The tracking file gets committed back to the repo automatically, so there’s a record of what went where:

{
  "2026-02-09-manager-ic-pendulum": {
    "title": "The Manager-IC Pendulum...",
    "firstPublishedAt": "2026-02-10T03:33:00Z",
    "platforms": {
      "devto": {
        "id": "3245645",
        "url": "https://dev.to/navinvarma/the-manager-ic-pendulum...",
        "publishedAt": "2026-02-10T03:33:00Z"
      },
      "hashnode": {
        "id": "abc123",
        "url": "https://navinvarma.hashnode.dev/the-manager-ic-pendulum...",
        "publishedAt": "2026-02-10T04:00:00Z"
      }
    }
  }
}

Setting it up yourself

Dev.to: Generate an API key at Settings > Extensions. Store it as DEVTO_API_KEY in your repo’s GitHub Actions secrets.

Hashnode: Get a Personal Access Token from Settings > Developer. Look up your publication ID with the curl command from above. Store them as HASHNODE_PAT and HASHNODE_PUBLICATION_ID respectively.

Medium: Import stories manually using Medium’s import tool. Paste the canonical URL and Medium imports the content of your post.

GitHub Actions secrets: Go to your repo Settings > Secrets and variables > Actions, and add each one. The workflow only runs when blog content changes, so it won’t burn through your Actions minutes.

If you already have posts on a platform (like I did with Dev.to), make sure the tracking JSON has entries for them before your first run. Otherwise the script will try to re-publish them, and the platform APIs will either reject the requests or create duplicates.

Reflections

Setting this up took an evening. It isn’t much work, but you need to know your way around Astro builds, GitHub Actions, and the platforms’ APIs.

The nice part is that I now have a single workflow: write in markdown, push to git, and my post shows up on three platforms with proper canonical URLs. Medium requires a manual import but at least the process is simple and the canonical URL is preserved.

I’ll probably add more platforms later if they have decent APIs. This was a fun project that automated away some of my toil, and I hope you find it useful too.