19 comments

  • f33d5173 3 minutes ago
    I don't know what such a demo would prove in the first place. LLMs are good at things that they have been trained on, or are analogues of things they have been trained on. SVG generation isn't really an analogue to any task that we usually call on LLMs to do. Early models were bad at it because their training only had poor examples of it. At a certain point model companies decided it would be good PR to be halfway decent at generating SVGs, added a bunch of examples to the finetuning, and voila. They still aren't good enough to be useful for anything, and such improvements don't lead them to be good at anything else - likely the opposite - but it makes for cute demos.

    I guess initially it would have been a silly way to demonstrate the effect of model size. But the size of the largest models stopped increasing a while ago, recent improvements are driven principally by optimizing for specific tasks. If you had some secret task that you knew they weren't training for then you could use that as a benchmark for how much the models are improving versus overfitting for their training set, but this is not that.

  • ericpauley 3 hours ago
    Going to have to disagree on the backup test. Opus flamingo is actually on the pedals and seat with functional spokes and beak. In terms of adherence to physical reality Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.

    I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.

    • wongarsu 2 hours ago
      Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version

      But in terms of making something physically plausible, Opus certainly got a lot closer

      • kmacdough 1 hour ago
        Given that adherence is the more significant practical barrier, it's probably the better signal. That is, if we decide to look for signal here.
    • tecoholic 53 minutes ago
      Even in the first one - sure, Qwen added extra details in the background. But the Pelican itself is a stork with a bent beak, and its feet are cut off at the legs. While impressive for a local model, I don't think it's a winner.
      • mejutoco 24 minutes ago
        Did you see Opus's bike for that same test, though? I know it's about the flamingo, but that bike is bad.
    • irthomasthomas 6 minutes ago
      It's a 3B model. It should not be this close. Debating the artistic qualities in detail is missing the point.
  • mentalgear 2 hours ago
    I understand the 'fun factor', but at this point I really wonder what this pelican still proves. I mean, providers certainly could have adapted to it if they wanted, and if you want to test how well a model handles potentially out-of-distribution prompts, it might be more worthwhile to mix different animals with different activities (a whale on a skateboard) than to always use the same one.
    • simonw 2 hours ago
      That's why I did the flamingo on a unicycle.

      For a delightful moment this morning I thought I might have finally caught a model provider cheating by training for the pelican, but the flamingo convinced me that wasn't the case.

      • akavel 1 hour ago
        r/LocalLlama is now doing a horse in a racing car:

        https://redd.it/1slz38i

      • furyofantares 1 hour ago
        It is completely wild to me that you prefer Qwen's flamingo. I think it's really bad and Opus' is pretty good.
        • simonw 1 hour ago
          The Opus one doesn't even have a bowtie.
          • furyofantares 1 hour ago
            The Opus one looks like a flamingo, and looks like it's riding the unicycle. Sitting on the seat. Feet on the pedals.

            The Qwen one looks like a 3-tailed, broken-winged, beakless (I guess? Is that offset white thing a beak? Or is it chewing on a pelican feather like it's a piece of straw?) monstrosity not sitting on the seat, with its one foot off the pedal (the other chopped off at the knee) of a malmanufactured wheel that has bonus spokes that are longer than the wheel.

            But yeah, it does have a bowtie and sunglasses that you didn't ask for! Plus it says "<3 Flamingo on a Unicycle <3", which perhaps resolves all ambiguity.

          • monksy 16 minutes ago
            Game over opus
      • prodigycorp 2 hours ago
        To me the opus flamingo is waaaay better than the qwen one. qwen has the better pelican, though.
      • dude250711 2 hours ago
        Is a flamingo on a unicycle not merely a special case of a pelican on a bicycle?
    • stephbook 31 minutes ago
      They're certainly aware of the test, but a turtle doing a kickflip on a skateboard? I seriously doubt they train their models for that.

      https://x.com/JeffDean/status/2024525132266688757

      If anything, the disastrous Opus4.7 pelican shows us they don't pelicanmaxx

    • BoorishBears 17 minutes ago
      This is a gag that's long outlived its humor, but we're in a space so driven by hype there are people who will unironically take some signal from it. They'll swear up and down they know it's for fun, but let a great pelican come out and see if they don't wave it as proof the model is great alongside their carwash test.
  • jbellis 2 hours ago
    For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same size qwen 3.5. So it's at best very slightly improved and not at all in the class of qwen 3.5 27b dense (26 solved) let alone opus (95/98 solved, for 4.6).
    • kristianp 23 minutes ago
      This has similar problems to swe bench in that models are likely trained on the same open source projects that the benchmark uses.

      https://blog.brokk.ai/introducing-the-brokk-power-ranking/

    • __natty__ 1 hour ago
      You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model, or against small frontier models like Haiku, Flash, or GPT nano.
      • javawizard 1 hour ago
        Not when the article they're commenting on was doing literally exactly the same thing.
      • ericd 1 hour ago
        Eh, it’s important perspective, lest someone start thinking they can drop $5k on a laptop and be free of Anthropic/OpenAI. Expensive lesson.
  • wood_spirit 1 hour ago
    Such a disconnect from the minutes I’ve lost and given up on Gemini trying to get it to update a diagram in a slide today. The one shot joke stuff is great but trying to say “that is close but just make this small change” seems impossible. It’s the gap between toy and tool.
  • VHRanger 2 hours ago
    That's not surprising; Opus & Sonnet have been regressing on many non-coding tasks since about the 4.1 release in our testing
  • sailingcode 1 hour ago
    I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?
    • layer8 1 hour ago
      You should have the pelican ride it to the carwash and wash it for you.
    • DANmode 1 hour ago
      That’s a long walk! You should reserve a ride with $PartnerRideshareCo.
  • bottlepalm 56 minutes ago
    I really wish they spent some time training for computer use. This model is incapable of finding anywhere near the correct x,y coordinate of a simple object in a picture.
  • nba456_ 7 minutes ago
    Good reminder that these tests have always been useless, even before they started training on it.
  • justinbaker84 23 minutes ago
    I love this benchmark!
  • refulgentis 15 minutes ago
    I liked both of Opus's better. It was very illuminating: in both cases I didn't see the errors Simon saw, and wondered why Simon skipped over the errors I saw.

    Pelican: saturated!

  • comandillos 3 hours ago
    I've been using Qwen3.5-35B-A3B for a bit via open code and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.
    • iib 2 hours ago
      This is about the newly released Qwen3.6. Just wanted to make sure you caught that.
  • aliljet 2 hours ago
    I'm really curious: what competes with Claude Code for driving a local LLM like Qwen 3.6?
  • lofaszvanitt 1 hour ago
    That Qwen flamingo on the unicycle is actually quite good. A work of art.
  • simon_is_genius 44 minutes ago
    Great analysis
  • jedisct1 1 hour ago
    I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews.

    It's pretty good at finding bugs, but not so good at writing patches to fix them.

  • JaggerFoo 1 hour ago
    FYI, using a 128GB M5 MacBook Pro, sourced from another article by the author.
  • throwuxiytayq 1 hour ago
    I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.
    • sharkjacobs 1 hour ago
      It feels like the results stopped being interesting a little while ago, but the practice has become part of simonw's brand, and it gives him something to post even when there's nothing interesting to say about another incremental improvement to a model, so I don't imagine he'll stop.
      • stephbook 28 minutes ago
        I, for one, expected progress. Uneven, sometimes delayed, but ever increasing progress.

        But that Opus pelican?

    • segmondy 26 minutes ago
      I can't believe you're such a party pooper. It's exciting times, the silly things do matter!
  • 19qUq 2 hours ago
    How about switching to MechaStalin on a tricycle? It gets kind of boring.
    • mvanbaak 1 hour ago
      boring ... the ways all the models fail at a simple task never get boring to me