Hacker News | jnpnj's comments

Looks like an informal DSL for specs that brings back some quantifiable structure. How many people follow the same approach?

Also, I wonder whether people who did MDD (model-driven development) have embedded AI in their methodology.


Hehe. Alternatively, "dotnil" would have sounded closer to dotnet while hinting at Lisp terminology and history.

    Location: Paris, FR
    Remote: full remote
    Willing to relocate: no
    Technologies: Python, TypeScript, Lisps, React, Django, FastAPI, PostgreSQL, GitLab CI, Docker
    Résumé/CV: https://registry.jsonresume.org/jnpn?theme=elegant (github https://github.com/jnpn)
    Email: johan.ponin.pro@gmail.com
    Schedule: Part-time (10/20h per week)
Senior Fullstack Developer | 12+ Years | Seeking Long-Term Collaboration as a Freelancer

Currently employed, but looking to build relationships with teams that value craftsmanship and velocity so I can grow my skills faster. I'm offering competitive rates ($60-90/hr), prioritizing team culture and interesting problems over maximizing income.

What I bring:

- Taste for type safety

- Pragmatic mindset (planning my solutions for regular delivery and avoiding delay or drift)

- Dev tooling expertise (recently: custom Babel AST analyzers for faster debugging)

- Proactive problem-solver who likes to improve processes

Ideal projects:

- Teams with strong engineering culture who build ambitious things cleanly and fast

- Greenfield development or systematic refactoring

- Bonus: Highly formal/industrial model-driven environments and FP/Logic Programming environments (Clojure, Haskell, Prolog)


Very, very nice timeline! Some interesting papers in the '90s section.

Maybe they could add https://github.com/VincentToups/emacs-utils/blob/master/mona... . I found J.V. Toups' writing very nice, and useful for seeing how monadic composition can exist without type support.
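For anyone who hasn't seen it, here is a minimal, hypothetical Python sketch (not Toups' code; the names are made up) of the same idea: a Maybe-style bind that works purely by convention, with no type system enforcing anything:

    # Hypothetical sketch: a Maybe-style monad by convention only.
    # "Nothing" is represented by None; "Just x" is just x itself.
    def unit(x):
        return x  # wrapping is a no-op in this untyped encoding

    def bind(value, f):
        """Apply f to value unless the chain has already failed (None)."""
        return None if value is None else f(value)

    def safe_div(a, b):
        return None if b == 0 else a / b

    # Chained computation: the first None short-circuits the rest.
    result = bind(bind(unit(10), lambda x: safe_div(x, 2)),
                  lambda y: safe_div(y, 0))
    print(result)  # None, because the second division fails

Nothing stops you from breaking the monad laws here; the discipline lives entirely in how you compose the functions, which is roughly the situation in Emacs Lisp too.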


Some tutorials seem to predate the introduction of the monad itself, it seemed...

Well, Moggi's paper on monads is from '91 and Wadler's first tutorial is from '92. Am I missing something?

I was wondering how people feel about this trend. LLMs allow you to free yourself from foundations (frameworks, programmable programs) and just generate whatever support layer you want from old or new libs. This is all very understandable... yet I find it a loss. In the Lisp world, having a core model and semantics shared by all the upper layers means ease of reuse (for instance, people leverage Emacs calc classes in other places); LLMs allow for easier fragmentation...

> LLMs allow for easier fragmentation...

I also suspect they allow easier consolidation: moving from a deprecated lib to a new (and better) one, for example.

Implementations will likely homogenize a bit as well, but on the other hand, boy am I glad not to see an increasing number of bizarre, naïve hand-rolled implementations of some things.


But is the debate about "fleshing out a system spec" or about "the ability to come up with, plan, and explore various ideas to solve problems elegantly on a budget"? I think these two sides are always conflated when discussing the impact of LLMs on users.

Yeah, I don't know why this never popped up on my radar. I read Abrash's books long ago, MS employees' blogs too, the history around the creation of UIs (Xerox/Apple and all that), the OS/2 era... and I never saw his name (or maybe selective vision tricked me).

Very fun fact


Who else is trying to leverage the situation so that they don't dig their own grave too fast?

    - I often don't ask the LLM for precompiled answers; I ask for a standalone CLI/tool
    - I often ask how it reached its conclusions, so I can extend my own perspective
    - I often ask it to describe its own metadata-level categorization too
I'm trying to use it to pivot and improve my own problem-solving skills, especially for large code bases where the difficulty is not conceptual but more about reference-graph size.

This is absolutely the proper way to do things. People who are either forced to speed-code by KPIs or lack the desire to understand what they're making are missing out on how quickly you can learn and refine using LLMs.

I do this sort of stuff too, but more because I have a fundamental mistrust of closed source anything. I don't like opaque binary firmware blobs, and I certainly don't like opaque answer machines, however smart they may be.

The only LLM I would feel comfortable truly trusting is one whose training data, training code, and harness are all open source. I do not mind paying for the costs of someone hosting this model for me.


Apparently there are a lot of innovations hitting the market: perovskites have left the lab, and tandem cells are above 30% efficiency.

And plugin solar is being roadblocked everywhere except Utah

https://series.sourceforge.net/ Is this the SERIES package you're referring to? Sorry, mostly a CL newbie here; it's the first time I've read about it.

Yes, this is the one I wrote about. I used it quite a bit a long time ago to get more performance. I remember vaguely that the performance was indeed there, but the package wasn't that easy to use and errors were hard to debug.
