haberman a day ago | next |

> Non-goals: Drop-in replacement for CPython: Codon is not a drop-in replacement for CPython. There are some aspects of Python that are not suitable for static compilation — we don't support these in Codon.

This is targeting a Python subset, not Python itself.

For example, something as simple as this will not compile, because lists cannot mix types in Codon (https://docs.exaloop.io/codon/language/collections#strong-ty...):

    l = [1, 's']
It's confusing to call this a "Python compiler" when the constraints it imposes pretty fundamentally change the nature of the language.

wpietri 7 hours ago | root | parent | next |

Yeah, this right here would kill it for me:

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

That rules out almost anything web-ish for me.

The use case I could imagine is places where you have a bunch of python programmers who don't really want to learn another language but you have modest amounts of very speed-sensitive work.

E.g., you're a financial trading company who has hired a lot of PhDs with data science experience. In that context, I could imagine saying, "Ok, quants, all of your production code has to work in Codon". It's not like they're programming masters anyhow, and having it be pretty Python-ish will be good enough for them.

Retr0id 7 hours ago | root | parent | next |

>> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

Yikes. These days I wouldn't even call those strings, just bytes. I can live with static/strong typing (I prefer it, even), but not having support for actual strings is a huge blow.

wpietri 7 hours ago | root | parent | prev |

Ah, looking further, I find this about the company: "Their focus lies in bridging the gap between these two aspects across various domains, with a particular emphasis on life sciences and bioinformatics."

That makes sense as a sales pitch. "Hey, company with a lot of money! Want your nerds to go faster and need less expensive hardware? Pay us for magic speed-ups!" So it's less a product for programmers than it is for executives.

bpshaver a day ago | root | parent | prev | next |

Who is out here mixing types in a list anyway?

dathinab 21 hours ago | root | parent | next |

Parsing JSON gives you roughly this type:

    type Json = None | bool | float | str | dict[str, Json] | list[Json]

You might have similar situations for configs, e.g. `float | str` for a time in seconds or a human-readable duration string like "30s".
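
In Python terms it's roughly this (a rough mypy-style sketch; the parse_seconds helper is made up):

    from typing import Union

    # recursive alias for arbitrary decoded JSON
    Json = Union[None, bool, int, float, str, dict[str, "Json"], list["Json"]]

    # config value: seconds as a number, or a human-readable string like "30s"
    def parse_seconds(value: Union[float, str]) -> float:
        if isinstance(value, str):
            return float(value.removesuffix("s"))
        return float(value)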

Given how fundamental such things are, I'm not sure there will be many larger projects (especially web servers and the like) that are compatible with this.

Also, many commonly used library/class features are unlikely to work (I don't know for sure, but they tend to be very dynamic in nature).

So IMHO this seems more like a Python-like language you can use for some scientific computations and similar than a general-purpose faster Python.

bpshaver 21 hours ago | root | parent |

Agreed, I was just joking. I understand heterogeneous lists are possible in Python, but with static type checking it's pretty rare for me to have heterogeneous lists unless it's duck typing.

JonChesterfield 17 hours ago | root | parent |

If your language obstructs heterogeneous lists, your programs will tend to lack them. Look for classes containing multiple hashtables mapping the same strings to different object types as a hint that they're missed.

Whether that's a feature is hard to say. Your language stopped you thinking in those terms, and stopped your colleagues from doing so. Did it force clarity of thought or awkward contortions in the implementation? Tends to depend on the domain.
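
A made-up sketch of the smell I mean, next to the heterogeneous container it stands in for:

    # parallel dicts keyed by the same strings, because one mixed-value dict felt off-limits
    class NodeTables:
        def __init__(self) -> None:
            self.labels: dict[str, str] = {}
            self.weights: dict[str, float] = {}

    # versus letting the value type vary per key
    nodes: dict[str, str | float] = {"a": "start", "b": 0.5}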

itishappy a day ago | root | parent | prev | next |

The json module returns heterogeneous dicts.

https://docs.python.org/3/library/json.html
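
e.g. (toy example):

    import json

    data = json.loads('{"name": "John Doe", "age": 32, "tags": ["x", 1]}')
    # data["name"] is a str, data["age"] an int, data["tags"] a list mixing str and int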

bpshaver 21 hours ago | root | parent |

Yeah, just because it can do that doesn't mean that it is good design.

gwking 6 hours ago | root | parent | next |

It is the design of JSON! Which is a reflection of the same dynamic typing choice made in the original design of Javascript.

mrguyorama 2 hours ago | root | parent |

They, uh, still aren't wrong hah.

Tell me again why we somehow standardized on sending the equivalent of JSObject.toString() for everything? Especially when "standardized" isn't

gpderetta 7 hours ago | root | parent | prev |

how would you represent an arbitrary JSON array in python then? A potentially heterogeneous list seems the obvious solution.

bpshaver an hour ago | root | parent |

Why would I want to do that? I'm rarely ingesting arbitrary JSON. Rather, I'm designing my data structures in a sensible way and then maybe serializing them to JSON. Just because JSON can represent heterogeneous lists doesn't mean it is a good idea to use heterogeneous lists in my programs.

CaptainNegative a day ago | root | parent | prev | next |

I often find myself mixing Nones into lists containing built-in types when the former would indicate some kind of error. I could wrap them all into a nullable-style type, but why shouldn't the interpreter implicitly handle that for me?
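
Something like this, where compute() is just a stand-in for whatever might fail:

    def compute(x: float) -> float:
        # stand-in for a calculation that can fail
        if x < 0:
            raise ValueError("negative input")
        return x ** 0.5

    results: list[float | None] = []
    for x in [4.0, -1.0, 9.0]:
        try:
            results.append(compute(x))
        except ValueError:
            results.append(None)  # keep the slot, mark the failure
    # results == [2.0, None, 3.0]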

gwking 5 hours ago | root | parent | prev | next |

An example related to JSON content is HTML content. I have a Python library that represents all of the standard HTML tags as a family of classes. It is like a lightweight DOM on the server side, and has resulted in a web server that does not use string based templating at all. It lets me construct trees of HTML completely in Python and then render them out with everything correctly escaped. I can also parse HTML into trees and manipulate them as I please (for e.g. scraping tasks and document transforms). It is all strongly typed using mypy and I adhere to the strictest generic typing I can manage.

Each node has a list of children, and the element type is `str|HtmlNode`. I find this vastly easier to use than the LXML ETree api, where nodes have `text` and `tail` attributes to represent interleaved text.
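
A rough sketch of the shape (hypothetical names, not the actual library):

    from __future__ import annotations
    from dataclasses import dataclass, field
    from html import escape

    @dataclass
    class HtmlNode:
        tag: str
        attrs: dict[str, str] = field(default_factory=dict)
        children: list[str | HtmlNode] = field(default_factory=list)

        def render(self) -> str:
            attr_str = "".join(f' {k}="{escape(v, quote=True)}"' for k, v in self.attrs.items())
            inner = "".join(escape(c) if isinstance(c, str) else c.render() for c in self.children)
            return f"<{self.tag}{attr_str}>{inner}</{self.tag}>"

    # children interleave text and elements directly, no .text/.tail juggling
    para = HtmlNode("p", children=["Hello, ", HtmlNode("em", children=["world"]), "!"])
    # para.render() == "<p>Hello, <em>world</em>!</p>"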

Interestingly, the LXML docs promote their design as follows (https://lxml.de/tutorial.html#elements-contain-text):

> The two properties .text and .tail are enough to represent any text content in an XML document. This way, the ElementTree API does not require any special text nodes in addition to the Element class, that tend to get in the way fairly often (as you might know from classic DOM APIs).

It could be a simple matter of taste! But I suspect that the difference between what they are describing as "classic DOM" vs what I am doing is that they are referring to experience with C/C++/Java libraries circa 2009 that had much less convenient dynamic type introspection. The "get in the way fairly often" reminds me of how verbose it is to deal with heterogeneous data in C/C++/ObjC. In ObjC for example, you could have an array mixing NSString with other NSObject subclasses, but you had to do work to type it correctly. If you wanted numbers in there you had to use NSNumber, which is an annoying box type that you never otherwise use. And ObjC was considered very dynamic in its day!

I have long felt that the root of much evil was the overbearing distinction between primitive and object types in C++/Java/Objective-C.

All of this is a long way of saying: I think "how to deal with heterogeneous lists of stuff" is a huge question in language design, library design, and the daily work of programming. Modern languages have by no means converged on a single way to represent varying types of elements. If you want to create trees of stuff, at some level that is "mixing types in a list" no matter how you might try to encode it. Just food for thought!

nicce a day ago | root | parent | prev | next |

Everyone who chooses Python in the first place.

bpshaver 21 hours ago | root | parent |

Well, I'm one of those people, and I feel that I rarely do this. Except if I have a list of different objects that implement the same interface, as another commenter mentioned.

RogerL 21 hours ago | root | parent | prev | next |

    return [key, value]

Myrmornis 8 hours ago | root | parent | next |

You should use a tuple there: it's a collection of fixed size where each slot has an identity. (There's a common confusion in Python circles that the main point of tuples is immutability; that's not so).
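
For example (a made-up function):

    def lookup(d: dict[str, int], key: str) -> tuple[str, int]:
        # fixed size, each slot has its own meaning
        return key, d[key]

    name, age = lookup({"John Doe": 32}, "John Doe")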

ghxst 18 hours ago | root | parent | prev |

Why would you do this over `return key, value` which produces a tuple? Just curious.

dgan 14 hours ago | root | parent | next |

Not the parent, but I return heterogeneous lists of the same length to Excel to be used by xlwings. The first row is the headers, but every row below is obviously heterogeneous.
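
Roughly like this (an illustrative sketch, not my actual code):

    import xlwings as xw

    rows = [
        ["name", "age", "score"],   # header row: all strings
        ["John Doe", 32, 98.5],     # data rows mix str, int and float
        ["Jane Roe", 28, 91.0],
    ]
    xw.Book().sheets[0].range("A1").value = rows  # write the whole block starting at A1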

quotemstr a day ago | root | parent | prev | next |

It's not even a subset. They break foundational contracts of the Python language without technical necessity. For example,

> Dictionaries: Codon's dictionary type does not preserve insertion order, unlike Python's as of 3.6.

That's a gratuitous break. Nothing about preserving insertion order interferes with compilation, AOT or otherwise. The authors of Codon broke dict ordering because they felt like it, not because they had to.

At least Mojo merely claims to be Python-like. Unlike Codon, it doesn't claim to be Python then note in the fine print that it doesn't uphold Python contractual language semantics.
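
As a trivial illustration (a made-up snippet), this is perfectly ordinary Python that silently changes behavior if insertion order is dropped:

    headers = {"Host": "example.com", "Accept": "application/json"}
    # Python guarantees items() iterates in insertion order; a dict that doesn't
    # preserve it can emit these lines in any order
    wire = "\r\n".join(f"{k}: {v}" for k, v in headers.items())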

orf a day ago | root | parent | next |

Try not to throw around statements like "they broke dict ordering because they felt like it".

Obviously they didn’t do that. There are trade-offs when preserving dictionary ordering.

baq 13 hours ago | root | parent | next |

dicts ordering keys in insertion order isn't an implementation detail anymore and hasn't been for years.

nick238 39 minutes ago | root | parent |

I get that all dicts are now effectively a `collections.OrderedDict`, but I've never seen practical code that uses the insertion order. You can't do much with that info (no `.pop()` by position, can't sort a dict without recreating it) beyond maybe helping readability when you print or serialize it.

dathinab 21 hours ago | root | parent | prev | next |

If you claim

> high-performance Python implementation

then no, these aren't trade-offs but breaking the standard without it truly being necessary.

Most importantly, this will break code in subtle and potentially very surprising ways.

They could just claim to be Python-like, and then no one would hold it against them for not keeping to the standard.

But if you are misleading about your product, people will take offense even if it isn't intentional.

actionfromafar a day ago | root | parent | prev |

The trade-off is a bit of speed.

cjbillington 21 hours ago | root | parent |

This might be what you meant, but the ordered dicts are faster, no? I believe ordering was initially an implementation detail that arose as part of performance optimisations, and only later declared officially part of the spec.

Someone 14 hours ago | root | parent |

> but the ordered dicts are faster, no?

They may be in the current implementations, but removing an implementation constraint can only increase the solution space, so it cannot make the best implementation slower.

As a trivial example, the current implementation that guarantees iteration happens in insertion order also is a valid implementation for a spec that does not require that guarantee.

adammarples a day ago | root | parent | prev |

Well would you claim that Python 3.5 isn't python?

stoperaticless a day ago | root | parent |

All versions of python are python.

If lang is not compatible with any of python versions, then the lang isn’t python.

False advertising is not nice. (even if the fineprint clarifies)

thesz 14 hours ago | root | parent |

> If lang is not compatible with any of python versions, then the lang isn’t python.

Python versions are not compatible between themselves, as python does not preserve backward compatibility, ergo python is not python.

jjk7 a day ago | root | parent | prev |

The differences seem relatively minor. Your specific example can be worked around by using a tuple, which in most cases does what you want.

itishappy a day ago | root | parent |

Altering python's core datatypes is not what I'd call minor.

They don't even mention the changes to `list`.

> Integers: Codon's int is a 64-bit signed integer, whereas Python's (after version 3) can be arbitrarily large. However Codon does support larger integers via Int[N] where N is the bit width.

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

> Dictionaries: Codon's dictionary type does not preserve insertion order, unlike Python's as of 3.6.

> Tuples: Since tuples compile down to structs, tuple lengths must be known at compile time, meaning you can't convert an arbitrarily-sized list to a tuple, for instance.

https://docs.exaloop.io/codon/general/differences

Pretty sure this means the following doesn't work either:

    config = { "name": "John Doe", "age": 32 }
Note: It looks like you can get around this via Python interop, but that further supports the point that this isn't really Python.

dathinab 21 hours ago | root | parent |

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

WTF, this is a super big issue, making this basically unusable for anything handling text (and potentially even just identifiers: if you aren't limited to the EU+US, having non-ASCII identifiers in code or text is common, i.e. while EU companies mostly code in English, that is much less likely in Asia, especially China and Japan).

It isn't even really a performance benefit compared to UTF-8, since UTF-8 text that uses only ASCII letters _is_ ASCII and you don't need Unicode-aware string operations for it.
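
e.g. (rough illustration):

    # pure-ASCII text is byte-for-byte identical in UTF-8, so nothing is lost by
    # storing strings as UTF-8 and only paying for Unicode handling when needed
    assert "hello".encode("ascii") == "hello".encode("utf-8")
    assert len("café".encode("utf-8")) == 5  # only non-ASCII chars need extra bytes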

Lucasoato a day ago | prev | next |

> Is Codon free? Codon is and always will be free for non-production use. That means you can use Codon freely for personal, academic, or other non-commercial applications.

I hope it is released under a truly open-source license in the future; this seems like a promising technology. I'm also wondering how it would match C++ performance if it is still garbage collected.

troymc a day ago | root | parent |

The license is the "Business Source License 1.1" [1].

The Business Source License (BSL) 1.1 is a software license created by MariaDB Corporation. It's designed as a middle ground between fully open-source licenses and traditional proprietary software licenses. It's kind of neat because it's a parametric license, in that you can change some parameters while leaving the text of the license unchanged.

For codon, the "Change Date" is 2028-03-01 and the "Change License" is "Apache License, Version 2.0", meaning that the license will change to Apache2 in March of 2028. Until then, I guess you need to make a deal with Exaloop to use codon in production.

[1] https://github.com/exaloop/codon?tab=License-1-ov-file#readm...

axit a day ago | root | parent |

From what I've seen, the "Change Date" is usually updated, so you always have the few-years-older software under the Apache License and the latest software under the BSL.

actionfromafar a day ago | root | parent | next |

Just to make it clear - the cutoff date on previously released software remains the same. So if you download it now and wait a few years, your software will have matured into its final form, the Apache 2 license.

actionfromafar a day ago | prev | next |

I immediately wonder how it compares to Shedskin¹

I can say one thing - Shedskin compiles to C++, which was very compelling to me for integrating into existing C++ products. Actually another thing too, Shedskin is Open Source under GPLv3. (Like GCC.)

1: https://github.com/shedskin/shedskin/

crorella a day ago | root | parent |

It looks like Codon has fewer restrictions compared to Shedskin.

actionfromafar a day ago | root | parent |

I suppose that's right; I don't think Shedskin can call numpy yet, for instance. On the other hand, it seems easier to put Shedskin on an embedded device.

amelius 21 hours ago | prev | next |

The challenge is not just to make Python faster, it's to make Python faster __and__ port the ecosystem of Python modules to your new environment.

eigenspace 4 hours ago | root | parent | next |

It’s also just simply not Python. It’s a separate language with a confusingly close syntax to Python, but quite different semantics.

w10-1 a day ago | prev | next |

Unclear if this has been in the works longer than the GraalVM LLVM build of Python discussed yesterday [1]. The first HN discussion of Codon is from 2022 [3].

Any relation? Any comparisons?

Funny I can't find the license for graalvm python in their docs [2]. That could be a differentiator.

- [1] GraalVM Python on HN https://news.ycombinator.com/item?id=41570708

- [2] GraalVM Python site https://www.graalvm.org/python/

- [3] HN Dec 2022 https://news.ycombinator.com/item?id=33908576

veber-alex a day ago | prev | next |

What's up with their benchmarks [1]? The page just shows benchmark names; I don't see any numbers or graphs. Tried Safari and Chrome.

[1]: https://exaloop.io/benchmarks/

sdmike1 a day ago | root | parent | next |

The benchmark page looks to be broken; the JS console shows some 404'd JS libs and a bad function call.

pizlonator 17 hours ago | root | parent | prev |

Also those are some bullshit benchmarks.

It’s not surprising that you can make a static compiler that makes tiny little programs written in a dynamic language into fast executables.

The hard part is making that scale to >=10,000 LoC programs. I dunno which static reasoning approaches codon uses, but all the ones I’m familiar with fall apart when you try to scale to large code.

That’s why JS benchmarking focused on larger and larger programs over time. Even the small programs that JS JIT writers use tend to have a lot of subtle idioms that break static reasoning, to model what happens in larger programs.

If you want to get in the business of making dynamic languages fast then the best advice I can give you is don’t use any of the benchmarks that these folks cite for your perf tuning. If you really do have to start with small programs then something like Richards or deltablue are ok, but you’ll want to diversify to larger programs if you really want to keep it real.

(Source: I was a combatant in the JS perf wars for a decade as a webkitten.)

timwaagh a day ago | prev | next |

It's a really expensive piece of software; that's why they don't publish their prices. I don't think it's reasonable to market such a product at your average dev. Anyhow, Cython and a bunch of others provide free and open-source alternatives.

shikon7 18 hours ago | prev | next |

From the documentation of the differences with Python:

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

That seems really odd to me. Who would use a framework nowadays that doesn't support unicode?

big-chungus4 a day ago | prev | next |

So, assuming I don't have integers bigger than int64 and don't rely on the order of built-in dicts, can I just take arbitrary Python code and use it with Codon? Can I use external libraries? NumPy, PyTorch? I also noticed that it isn't supported on Windows.

jitl a day ago | prev | next |

What’s the difference between this and Cython? I think another comment already asks about shedskin.

jay-barronville a day ago | prev |

Instead of building their GPU support atop CUDA/NVIDIA [0], I’m wondering why they didn’t instead go with WebGPU [1] via something like wgpu [2]. Using wgpu, they could offer cross-platform compatibility across several graphics API’s, covering a wide range of hardware including NVIDIA GeForce and Quadro, AMD Radeon, Intel Iris and Arc, ARM Mali, and Apple’s integrated GPU’s.

They note the following [0]:

> The GPU module is under active development. APIs and semantics might change between Codon releases.

The thing is, based on the current syntax and semantics I see, it’ll almost certainly need to change to support non-NVIDIA devices, so I think it might be a better idea to just go with WebGPU compute pipelines sooner rather than later.

Just my two pennies…

[0]: https://docs.exaloop.io/codon/advanced/gpu

[1]: https://www.w3.org/TR/webgpu

[2]: https://wgpu.rs

MadnessASAP 18 hours ago | root | parent |

Well, for better or worse, CUDA is the GPU programming API. If you're doing high-performance GPU workloads, you're almost certainly doing it in CUDA.

While compute is within WebGPU's stated design, I would imagine it is focused on presentation/rendering and probably not on large, demanding workloads.