(log configuration identifier level format-string arg0 arg1 ... argN)
After each partial application step you can do more and more work, narrowing the scope of what you return from subsequent functions.
;; Preprocessing the configuration is possible
;; Imagine all logging is turned off, now you can return a noop
(partial log conf)
;; You can look up the identifier in the configuration to determine what the logger function should look like
(partial log conf id)
;; You could return a noop function if the level is not enabled for the particular id
(partial log config id level)
;; Pre-parsing the format string is now possible
(partial log conf id level "%time - %id")
In many codebases I've seen, a large amount of code exists literally just to emulate this process with multiple classes, where you're performing work and then caching it somewhere. In simpler cases you can consolidate all of that into a function call and use partial application. Without some heroic work by the compiler you simply cannot do that in an imperative style.

In the most successful functional programming language, Excel, the dataflow is fully exposed, which makes this easy.
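The staged narrowing described above can be sketched with ordinary Python closures. Everything here is a hypothetical stand-in for the Lisp-style `(partial log conf id level ...)` steps: `make_logger`, the configuration shape, and the format handling are all invented for illustration.

```python
# Hypothetical staged logger: constructing it does the configuration
# work once, and returns a cheaper function for the remaining arguments.
def make_logger(conf, ident, level):
    # Consult the configuration up front (the "partial log conf id level" stage).
    enabled = conf.get(ident, set())
    if level not in enabled:
        return lambda *args: None  # logging is off for this id/level: return a noop
    # A further stage could pre-parse the format string here.
    def log(fmt, *args):
        return f"[{level}] {ident}: " + fmt % args
    return log

conf = {"db": {"warn", "error"}}
warn = make_logger(conf, "db", "warn")   # configuration consulted once
noop = make_logger(conf, "db", "debug")  # "debug" disabled: a noop

print(warn("%d rows", 3))  # [warn] db: 3 rows
print(noop("%d rows", 3))  # None
```

The point is the same as in the thread: each staging step gets to do real work, and the fully disabled case collapses to a function that does nothing at all.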
Certain functional programming languages prefer passing just one data item from one function to the next: one parameter in and one parameter out. For this to work with more values, they need to use functions as outputs. It is an unnecessary cognitive burden. And APL programmers would love it.
Let's make an apple pie as an example. You give the apples, butter, and flour to the cook. The cursed curry version would be "use knife for cutting, add cutting board, add apple, stand near table, use hand. Bowl, add table, put, flour, mix, cut, knife butter, mixer, put, press, shape, cut_apple." etc.
https://jonathanwarden.com/implicit-currying-and-folded-appl...
(Side note: if you're reading this Roc devs, could you add a table of contents?)
1) "performance is a bit of a concern"
2) "curried function types have a weird shape"
2 is followed by a single example of how it doesn't work the way the author would expect it to in Haskell.

It's not a strong case in my opinion. Dismissed.
> I'd also love to hear if you know any (dis)advantages of curried functions other than the ones mentioned.
I think it fundamentally boils down to the curried style being _implicit_ partial application, whereas a syntax for partial application is _explicit_. And as is often the case, being explicit is clearer. If you see something like
let f = foobinade a b
in a curried language then you don't immediately know if `f` is the result of foobinading `a` and `b` or if `f` is `foobinade` partially applied to some of its arguments. Without currying you'd either write

let f = foobinade(a, b)

or

let f = foobinade(a, b, $) // (using the syntax in the blog post)
and now it's immediately, explicitly clear which of the two cases we're in.

This clarity not only helps humans, it also helps compilers give better error messages. In a curried language, if a function is mistakenly applied to too few arguments then the compiler can't always immediately detect the error. For instance, if `foobinade` takes 3 arguments, then `let f = foobinade a b` doesn't give rise to any errors, whereas a compiler can immediately detect the error in `let f = foobinade(a, b)`.
A syntax for partial application offers the same practical benefits as currying without the downsides (albeit losing some of the theoretical simplicity).
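A small Python sketch of the difference (using `functools.partial` for the explicit form; `foobinade` is the thread's made-up function):

```python
from functools import partial

def foobinade(a, b, c):
    return a + b + c

# Explicit style: too few arguments fail immediately...
try:
    foobinade(1, 2)
except TypeError:
    print("caught missing-argument error")

# ...and partial application is spelled out:
f = partial(foobinade, 1, 2)
print(f(3))  # 6

# A curried version silently returns a function instead of erroring:
curried = lambda a: lambda b: lambda c: a + b + c
g = curried(1)(2)   # no error here; the mistake surfaces later, if at all
print(callable(g))  # True
```

The curried mistake only shows up when `g` is eventually used as a value, which is exactly the delayed-error problem described above.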
1. Looking at a function call, you can't tell if it's returning data, or a function from some unknown number of arguments to data, without carefully examining both its declaration and its call site
2. Writing a function call, you can accidentally get a function rather than data if you leave off an argument; coupled with pervasive type inference, this can lead to some really tiresome compiler errors
3. Functions which return functions look just like functions which take more arguments and return data (card-carrying functional programmers might argue these are really the same thing, but semantically, they aren't at all - in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?)
3a. Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function (so make_string_comparator_for_locale has type like Locale -> Function<string -> string -> order>), so now if you actually want to return a function, there's boilerplate at the return and call sites that wouldn't be there in a less 'concise' language!
I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase. I think academic and hobby languages, and so functional languages, are particularly prone to this. I think implicit currying is one of these features.
Functions can be explicitly written to do this, or it can be achieved through compiler optimisation.
Using curried OR tuple-style argument lists requires identifying an argument by its position rather than its name. This saves room on the screen but adds mental overhead.
The fact is that arguments always have names anyway, and you always have to know what they are.
There is one situation, however, where Standard ML prefers currying: higher-order functions. To take one example, the type signature of `map` (for mapping over lists) is `val map : ('a -> 'b) -> 'a list -> 'b list`. Because the signature is given in this way, one can "stage" the higher-order function argument and represent the function "increment all elements in the list" as `map (fn n => n + 1)`.
That being said, because of the value restriction [0], currying is less powerful because variables defined using partial application cannot be used polymorphically.
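The same staging can be mimicked in Python with `functools.partial`, here standing in for SML's `map (fn n => n + 1)`:

```python
from functools import partial

def add(x, y):
    return x + y

# Stage the function argument once, then reuse it on many lists.
increment_all = partial(map, partial(add, 1))

print(list(increment_all([1, 2, 3])))  # [2, 3, 4]
print(list(increment_all([10])))       # [11]
```

Staging the higher-order argument is the one place where even currying skeptics in this thread tend to agree the style pays off.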
def add(x: int, y: int) -> int { return x + y; }
def add3 = add(_, 3);
Or more simply, reusing some built-in functions:

def add3 = int.+(_, 3);

The "hole" syntax for partial application with dollar signs is a really creative alternative that seems much nicer. Does anyone know of any languages that actually do it that way? I'd love to try it out and see if it's actually nicer in practice.
In the sense that "make_string_comparator" is not a useful concept. Being able to make a "string comparator" is inherently a function of being able to compare strings, and carving out a bespoke concept for some variation of this universal idea adds complexity that is neither necessary nor particularly useful. At the extreme, that's how you end up with Enterprise-style OO codebases full of useless nouns like "FooAdapter" and "BarFactory".
The alternative is to have a consistent, systematic way to turn verbs into nouns. In English we have gerunds. I don't have to say "the sport where you ski" and "the activity where you write", I can just say "skiing" and "writing". In functional programming we have lambdas. On top of that, curried functions are just a sort of convenient contraction to make the common case smoother. And hey, maybe the contraction isn't worth the learning curve or usability edge-cases, but the function it's serving is still important!
> Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function
That seems either completely self-inflicted, or a limitation of whatever language you're using. I've worked on a number of codebases in Haskell, OCaml and a couple of Lisps, and I have never seen or wanted anything remotely like this.
In that case I want the signature of "this function pre-computes, then returns another function" and "this function takes two arguments" to be different, to show intent.
> achieved through compiler optimisation
Haskell is different in that its evaluation ordering allows this. But in strict evaluation languages, this is much harder, or even forbidden by language semantics.
Here's what Yaron Minsky (an OCaml guy) has to say:
> starting from scratch, I’d avoid partial application as the default way of building multi-argument functions.
https://discuss.ocaml.org/t/reason-general-function-syntax-d...
And yeah I think this is the way to go. For higher-order functions like map it feels too elegant not to write it in a curried style.
And yes, another comment mentioned that Scala supports this syntax!
I'm failing to see how they're not isomorphic.
It's also a question of whether this is exclusive to a curried definition or if such an optimization may also apply to partial application with a special operator like in the article. I think it could, but the compiler might need to do some extra work?
> This feature does have some limitations, for instance when we have multiple nested function calls, but in those cases an explicit lambda expression is always still possible.
I've also complained about that a while ago https://news.ycombinator.com/item?id=35707689
---
The solution is to delimit the level of expression the underscore (or dollar sign suggested in the article) belongs to. In Kotlin they use braces and `it`.
{ add(it, 3) } // Kotlin
add(_, 3) // Scala
Then modifying the "hole in the expression" is easy. Suppose we want to subtract 2 from the first argument before passing it to `add`:

{ add(subtract(it, 2), 3) } // Kotlin
// add(subtract(_, 2), 3) // no, this means adding 3 to the function `subtract(_, 2)`
x => { add(subtract(x, 2), 3) } // Scala

The distinction is mostly semantic, so you could say they are the same. But I thought it made sense to emphasize that the former is a feature of function types, and the latter is still technically single-parameter.
I suppose one real difference is that you cannot feed a tuple into a parameter list function. Like:
fn do_something(name: &str, age: u32) { ... }
let person = ("Alice", 40);
do_something(person); // doesn't compile
They are not equally easy for me to use when I'm writing a program. So from a software engineering perspective, they are very much not the same.
foldr f z = go
where
go [] = z
go (x : xs) = f x (go xs)
when called with (+) and 0 can be inlined to

go xs = case xs of
  [] -> 0
  (x : xs) -> x + go xs

which doesn't have to create a closure to pass around the function and zero value, and can subsequently inline (+), etc.

However, as the article outlines, there are differences (both positive and negative) to using functions with these types. Curried functions allow for partial application, leading to elegant definitions; e.g., in Haskell, we can define a function that sums over lists as `sum = foldl (+) 0`, where we leave out foldl's final list argument, giving us a function expecting a list that performs the behavior we expect. However, this style of programming can lead to contortions and unwieldy code because of the positional nature of curried functions, e.g., having to use function combinators such as Haskell's flip function (with type (A -> B -> C) -> B -> A -> C) to juggle arguments you do not want to fill to the end of the parameter list.
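The `flip` combinator mentioned above is easy to state in Python, though the example names (`divide`, `halve`) are mine:

```python
from functools import partial

def flip(f):
    # Haskell's flip: (A -> B -> C) -> (B -> A -> C)
    return lambda b, a: f(a, b)

def divide(x, y):
    return x / y

# partial() can only fix leading arguments, so fixing the *second*
# argument of divide requires juggling with flip first:
halve = partial(flip(divide), 2)
print(halve(10))  # 5.0
```

This is exactly the positional-argument juggling the comment complains about: the combinator exists only to work around argument order.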
I think you are focusing on the theoretical aspect of partial application and missing the article's actual argument: that making it the default, implicit way of defining and calling functions isn't a good programming interface.
getClosest :: Set Point -> Point -> Point
You could imagine getClosest building a quadtree internally, and that tree wouldn't depend on the second argument. I say slightly contrived because I would probably prefer to make the tree explicit if this was important.
Another example would be if you were wrapping a C-library but were exposing a pure interface. Say you had to create some object and lock a mutex for the first argument but the second was safe. If this was a function intended to be passed to higher-order functions then you might avoid a lot of unnecessary lock contention.
You may be able to achieve something like this with optimisations of your explicit syntax, but argument order is relevant for this. I don't immediately see how it would be achieved without compiling a function for every permutation of the arguments.
fun x => add(subtract(x, 2), 3) // Virgil

The article draws a three-way distinction between curried style (à la Haskell), tuples, and parameter lists.
I'm talking about the distinction it claims exists between the latter two.
The flip side of your example is that people see a function signature like getClosest, and think it's fine to call it many times with a set and a point, and now you're building a fresh quadtree on each call. Making the staging explicit steers them away from this.
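A hedged Python sketch of the explicit-staging alternative: `build_index` is a sorted list standing in for the quadtree, and all names here are hypothetical.

```python
def build_index(points):
    # stand-in for building a quadtree over the point set
    return sorted(points)

def make_get_closest(points):
    index = build_index(points)  # the expensive step happens once, visibly
    def get_closest(p):
        return min(index, key=lambda q: abs(q - p))
    return get_closest

get_closest = make_get_closest([3, 9, 1, 7])  # staging is explicit at the call site
print(get_closest(8))  # 7
print(get_closest(0))  # 1
```

Because the constructor has a different name and shape than the query function, nobody can accidentally rebuild the index on every lookup.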
I was imagining you might achieve this optimization by inlining the function. So if you have
getClosest(points, p) = findInTree(buildTree(points), p)
And call it like

myPoints = [...]
map (getClosest(myPoints, $)) myPoints

Then the compiler might unfold the definition of getClosest and give you

map (\p -> findInTree(buildTree(myPoints), p)) myPoints

where it then notices that the first part does not depend on p, and rewrites this to

let tree = buildTree(myPoints) in map (\p -> findInTree(tree, p)) myPoints
Again, pretty contrived example. But maybe it could work.

I wrote a non-trivial lambda program [1] which enumerates proofs in the Calculus of Constructions to demonstrate [2] that BBλ(1850) > Loader's Number.
[1] https://github.com/tromp/AIT/blob/master/fast_growing_and_co...
[2] https://codegolf.stackexchange.com/questions/176966/golf-a-n...
Irrespective of currying, this is a really interesting point - that the structure of an API should reflect its runtime resource requirements.
I don't believe inlining can take you to the exact same place though. Thinking about explicit INLINE pragmas, I envision that if you were to implement your partial function application sugar you would have to decide whether the output of your sugar is marked INLINE and either way you choose would be a compromise, right? The compromise with Haskell and curried functions today is that the programmer has to consider the order of arguments, it only works in one direction but on the other hand the optimisation is very dependable.
'Putting things' (multi-argument function calls, in this case) 'in-band doesn't make them go away, but it does successfully hide them from your tooling', part 422.
From a theoretical perspective, a tuple expresses the idea of "many things" and a multi-argument parameter list expresses the idea of both "many things" and "function arguments." Thus, from a cleanliness perspective for your definitions, you may want to separate the two, i.e., require function have exactly one argument and then pass a tuple when multiple arguments are required. This theoretical cleanliness does result in concrete gains: writing down a formalism for single-argument functions is decidedly cleaner (in my opinion) than multi-argument functions and implementing a basic interpreter off of this formalism is, subsequently, easier.
From a systems perspective, there is a clear downside in this space. If tuples exist on the heap (as they do for most functional languages), you induce a heap allocation when you want to pass multiple arguments! This pitfall is evident with the semi-common beginner's mistake with OCaml algebraic datatype definitions where the programmer inadvertently wraps the constructor type with parentheses, thereby specifying a constructor of one-argument that is a tuple instead of a multi-argument constructor (see https://stackoverflow.com/questions/67079629/is-a-multiple-a... for more details).
let result =
input
|> foobinade a b
|> barbalyze c d
Or, if we really want to name our partial function before applying it, we can use the >> operator instead:

let f = foobinade a b >> barbalyze c d
let result = f input
Requiring an explicit "hole" for this defeats the purpose:

let f = barbalyze(c, d, foobinade(a, b, $))
let result = f(input)
Or, just as bad, you could give up on partial function application entirely and go with:

let result = barbalyze(c, d, foobinade(a, b, input))
Either way, I hope that gives everyone the same "ick" it gives me.

Maybe there could be a rule that parameters have to be named only if their type doesn't already disambiguate them, and if there isn't some concordance between the naming in the argument expression and the parameter, or something along those lines. But the ergonomics of that might be annoying as well.
args = (a, b, c)
f args
…and that will have the effect of binding a, b, and c as arguments in the called function.

In fact many “scripting” languages, like Javascript and Python, support something close to this using their array type. If you squint, you can see them as languages whose functions take a single argument that is equivalent to an array. At an internal implementation level this equivalence can be messy, though.
Lower level languages like C and Rust tend not to support this.
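In Python the tuple/parameter-list boundary is crossed with explicit unpacking, so the comment's "single array argument" view is easy to demonstrate:

```python
def do_something(name, age):
    return f"{name} is {age}"

person = ("Alice", 40)

# Splat the tuple into the parameter list:
print(do_something(*person))  # Alice is 40

# Or collect a parameter list back into a tuple:
def collect(*args):
    return args

print(collect("Alice", 40) == person)  # True
```

The `*` operator is the explicit bridge between the two styles; it's the same conversion the Rust example above (`do_something(person)`) refuses to do implicitly.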
Then there's an implication of 'sure, but that doesn't actually help much if it's not standard' and then it's not addressed further.
The article is about programmer ergonomics of a language. Two languages can have substantially different ergonomics even when there is a straightforward mapping between the two.
let warn_user ~message = ... (* the ~ makes this a named parameter *)
let error = "fatal error!!" in
warn_user ~message:error; (* different names, have to specify both *)
let message = "fatal error!!" in
warn_user ~message; (* same names, elided *)
The elision doesn't always kick in, because sometimes you want the variable to have a different name, but in practice it kicks in a lot, and makes a real difference. In a way, cases when it doesn't kick in are also telling you something, because you're crossing some sort of context boundary where some value is called different things on either side.

let result = (barbalyze(c, d, $) . foobinade(a, b, $)) input

Or if you prefer left-to-right:

let result = input
|> foobinade(a, b, $)
|> barbalyze(c, d, $)
Maybe what isn't clear is that this hole operator would bind to the innermost function call, not the whole statement.

When someone writes
f = foobinade a b
g = foobinadd c d
there is no confusion for the compiler. The problem is the reader. Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.

Whereas with explicit syntax, the parentheses say what the author thinks they're doing, and the compiler will yell at them if they get it wrong.
let foos = foobinate(a, b, input)
let bars = barbakize(c, d, foos)
Other languages have method call syntax, which allows some chaining in a way that works well with autocomplete.

let result = input
|> add_prefix_and_suffix("They said '", $, "'!")

It can, or it can't, depending on the situation. Sometimes it just adds weight to the mental model (because now there's another variable in scope).
Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results". Or rather, you never have a result that isn't a function; `0` and `lambda: 0` (in Python syntax) are the same thing.
It does, of course, turn out that for many people this isn't a natural way of thinking about things.
Everyone knows that. At least everyone who would click a post titled "A case against currying." The article's author clearly knows that too.
That's not the point. The point is that this distinction is very meaningful in practice, as many functions are only meant to be used in one way. It's extremely rare that you need to (printf "%d %d" foo). The extra freedom provided by currying is useful, but it should be opt-in.
Just because two things are fundamentally equivalent, it doesn't mean it's useless to distinguish them. Mathematics is the art of giving the same name to different things; and engineering is the art of giving different names to the same thing depending on the context.
Curried functions are probably one of the first new things you come across if you go from an imperative language to a functional language. In purely functional languages, the convention is to define an n-parameter function inductively by staggering the parameters: applying the function to argument #1 returns a function that takes parameters 2..n, which in turn can be applied to argument #2 to return a function that takes parameters 3..n, etc. until all the arguments are given and the result is returned. For instance, we can define a 3-parameter add function that adds three numbers:
add x y z = x + y + z
-- this is syntactic sugar for:
add = \x -> (\y -> (\z -> x + y + z))
-- add has the following type:
add :: Int -> (Int -> (Int -> Int))
-- you can apply left-to-right:
((add 1) 2) 3 -- returns 6
We make the arrow -> right-associative, so we can write Int -> Int -> Int -> Int. Additionally, we make function application left-associative, so we can write add 1 2 3 and minimize our use of parentheses.
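For contrast, the same curried add can be written out in Python, where every staging step is explicit (this restatement is mine, not part of the definition above):

```python
# The 3-parameter add, curried by hand with nested single-argument functions:
add = lambda x: lambda y: lambda z: x + y + z

print(add(1)(2)(3))  # 6

# Each application peels off one parameter:
add_one = add(1)          # still expects parameters 2..3
add_one_two = add_one(2)  # still expects parameter 3
print(add_one_two(3))     # 6
```

Without the right-associative arrow and left-associative application, every stage needs its own pair of parentheses.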
I want to argue that when you define functions in this style, although it is nice and elegant, there are also some things that are lost.
There are roughly three different "styles" that programming languages offer for writing multi-parameter function definitions with. Firstly, there is the imperative style which I will call the "parameter list" style. Here, multiple parameters is a baked-in feature of functions. This is the default in imperative languages like Rust:
fn f(p1: P1, p2: P2, p3: P3) -> R { ... } // definition
f(a1, a2, a3) // call
f : fn(P1, P2, P3) -> R // type
Another form is the "curried" style, offered in pure functional languages like Haskell:
f p1 p2 p3 = ... -- definition
f a1 a2 a3 -- call
f :: P1 -> P2 -> P3 -> R -- type
Finally, there is the "tuple" style. It looks similar to the parameter list style, but the multiple parameters are not part of the functions itself. The function just has one parameter, but that parameter is a tuple so it effectively carries multiple values. This is usually possible in functional languages like Haskell as well but not the standard:
f(p1, p2, p3) = ... -- definition
f(a1, a2, a3) -- call
f :: (P1, P2, P3) -> R -- type
Note that some imperative languages also offer these functional styles, but it gets a bit unwieldy. For instance, we can do the curried style in JavaScript:
const f = p1 => p2 => p3 => ...;
f(a1)(a2)(a3)
We can also do the tuple style in Rust:
fn f((p1, p2, p3): (P1, P2, P3)) -> R { ... }
f((a1, a2, a3))
f : fn((P1, P2, P3)) -> R
Ultimately, even though these styles have different practical use cases, they are equivalent in theory: the types (P1, P2) -> R and P1 -> P2 -> R are isomorphic, which means there is a one-to-one mapping between functions of those types. So, what's the reason for preferring the currying style to the others?
When you ask the internet why we do curried functions, the main response is "because it makes partial application straightforward". Partial application is a mechanism where you fix one of the parameters of a multi-argument function to a certain value, and get a new function that only takes the rest of the parameters as input. It is true that partial application is very natural and elegant for the currying style; for instance, if we take our 3-parameter add from before, you can do the following in Haskell:
add' = add 1
-- now, add' = \y -> (\z -> 1 + y + z)
add' :: Int -> Int -> Int
add'' = add' 2
-- now, add'' = \z -> 1 + 2 + z
add'' :: Int -> Int
add'' 3 -- returns 6
This is especially nice when we have higher-level functions like map and fold. For instance, we can do pretty crazy cool things with partial applications and function composition:
length = foldr (+) 0 . map (const 1)
length2d = foldr (+) 0 . map length
length2d [[1, 4, 2], [], [7, 13]] -- returns 5
However, it is often wrongly assumed that the option of doing partial application is a special property of curried functions! We can totally do partial application for functions in the parameter list style or tuple style. If we redefine add in the tuple style, we get something like the following:
add(x, y, z) = x + y + z
add :: (Int, Int, Int) -> Int
add' = let x = 1 in \(y, z) -> add(x, y, z)
add'' = let y = 2 in \z -> add'(y, z)
add''(3) -- returns 6
"But," I hear you say, "this clearly looks horrible." Or maybe not, I don't want to put words in your mouth. But we can easily define a bit of syntactic sugar to make it look nice, for instance by defining a $ "hole operator":
add' = add(1, $, $)
add'' = add'(2, $)
add''(3) -- returns 6
In my opinion, this is actually a bit more readable. The more complicated example now looks as follows:
length = foldr((+), 0, $) . map(const(1), $)
length2d = foldr((+), 0, $) . map(length, $)
length2d([[1, 4, 2], [], [7, 13]]) -- returns 5
I actually find this form clearer. It kind of shows the "flow" of the data that you feed into length or length2d: first into the second parameter of map, then the result of that is fed into the third parameter of foldr.
Additionally, such a feature allows partial applications of not just the first parameter, which does not work by default for the curried style. For instance if we want to fix the second parameter of map:
-- this is all colors. ever made.
allColors = ["red", "green", "blue"]
forEachColor = map($, allColors)
This feature does have some limitations, for instance when we have multiple nested function calls, but in those cases an explicit lambda expression is always still possible.
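To show the hole operator isn't tied to special language support, here is a rough Python emulation; the `holes` helper and its sentinel are my own sketch, not a proposal from the article:

```python
HOLE = object()  # plays the role of the article's `$`

def holes(f, *fixed):
    """Return a function whose arguments fill the HOLE slots in order."""
    def applied(*args):
        it = iter(args)
        return f(*(next(it) if slot is HOLE else slot for slot in fixed))
    return applied

def add(x, y, z):
    return x + y + z

add1 = holes(add, 1, HOLE, HOLE)   # the article's add(1, $, $)
add12 = holes(add1, 2, HOLE)       # the article's add'(2, $)
print(add1(2, 3))  # 6
print(add12(3))    # 6
```

Unlike `functools.partial`, this lets you fix any parameter position, which is the article's point about `map($, allColors)`.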
So, despite its elegance, the curried function style doesn't really make partial application more powerful; with a little bit of syntactic sugar we can easily emulate its powers. However, I suspect there is another, more "vibe-based" reason why functional programmers have a dependence on curried functions stronger than their dependence on water or sleep.
When you first learn functional programming and finally understand curried function types, it's like you're looking inside the matrix. It's so cool that when you make -> right-associative and function application left-associative, successively applying a curried function to multiple parameters unfolds beautifully! We don't even have to write parentheses when doing function application. And we get partial application completely for free, so we can write convoluted definitions for calculating the length of a 2D list and feel very clever.
Even more beautifully, it's essentially an inductive "shape", which reflects the dichotomy between imperative languages and functional languages: they've got their parameter lists, which are more like arrays (iterative), but we have curried functions, which are more like lists (inductive). It makes me feel all warm and fuzzy inside just thinking about the elegance! This is clearly meant to be!
Just because you can, doesn't mean you should. There are some good reasons to prefer the tuple style.
First of all, performance is a bit of a concern. When you call a curried function like add 2 3, the add 2 first evaluates to a new function expression \y -> add 2 y, which is then applied to 3. Every call to a multi-parameter function creates a bunch of intermediate functions. However, I'm sure a good enough optimizer can eliminate that overhead, so this is not really the greatest concern.
More importantly, curried function types have a weird shape. The whole idea about functions is that they take an input and give back an output, so they have types like In -> Out. When you unify this with a curried function type like P1 -> P2 -> P3 -> R, you get In = P1 and Out = P2 -> P3 -> R. When you unify it with a tupled function type like (P1, P2, P3) -> R, you get In = (P1, P2, P3) and Out = R, which seems more logical.
A consequence of this weird shape is a kind of asymmetry: if a function returns multiple outputs, it returns a tuple, but if it takes multiple inputs, you instead stagger them. As a result, curried functions often don't compose nicely.
sayHi name age = "Hi I'm " ++ name ++ " and I'm " ++ show age
people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]
-- ERROR: sayHi is String -> Int -> String, a person is (String, Int)
conversation = intercalate "\n" (map sayHi people)
The real issue here is that map is very generic and expects a function of the form In -> Out but curried functions don't have this shape. To make this work we have to instead pass uncurry sayHi as our mapping function. In this 2-parameter example that isn't so bad, but it gets worse with more parameters.
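The same adapter is easy to state in Python; `uncurry` here is a hypothetical helper that splats a tuple into a parameter list, mirroring Haskell's uncurry:

```python
def uncurry(f):
    # adapt a two-parameter function to accept one (A, B) pair
    return lambda pair: f(*pair)

def say_hi(name, age):
    return f"Hi I'm {name} and I'm {age}"

people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]
conversation = "\n".join(map(uncurry(say_hi), people))
print(conversation)
```

With more than two parameters you need a family of such adapters (or variadic splatting), which is the "it gets worse" the article alludes to.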
I personally faced another consequence while doing a project involving predicates that said something about monad-returning functions in the Rocq proof assistant. The predicate looked something like the following (parameters in {} are implicit type parameters):
Definition P {In Out : Type} (f : In -> State Out) := ...some proposition about f...
If you define a function the "recommended" way, in a curried style, you get something like f : P1 -> ... -> Pn -> State R. This does not unify with In -> State Out at all, so P(f) is not even well-typed! I had to manually uncurry the function every time. :(
I realize that functional languages are not going to switch to the tuple style all of a sudden; millions of lines of Haskell have been written, and curried functions are just how it's done right now. Nobody is going to change all of that because someone on the internet thought it would be a good idea :). However, if you ever happen to be writing a functional language or standard library, consider experimenting with the tuple style and an alternative syntax for partial application.
Don't take the article too seriously or as an absolute judgment that the tupled style is always better. I admit that for some higher-order functions, like map and fix, a curried definition just seems too nice to resist. I just think for most purposes the tuple style makes a lot more sense.
I'd also love to hear if you know any (dis)advantages of curried functions other than the ones mentioned. I've written a fair bit of functional code, but I'm by no means an expert on the topic and would always love to learn more. And it's possible I just wrote this article because I was salty that I kept having to uncurry functions during my proof assistant projects.
I did want to mention one case where the curried style is superior, although it is not very common. If you live in a dependently typed language like Gallina (the language of the Rocq/Coq proof assistant) or Agda, you can have the return type of a function depend on one of the inputs (not one of the input types, one of the input values), or you can have the type of the second parameter of a function depend on the value of the first. For instance, a common example of a dependent type is a natural number that is guaranteed to be upper-bounded by another natural number n:
(* an instance of [fin n] is a pair of a number and a proof that x < n *)
Definition fin (n : nat) := { x : nat & x < n }.
Definition plus1 (n : nat) (i : fin n) : fin (n + 1) :=
  (i.1 + 1 ; (* proof that i.1 + 1 < n + 1 *)).
Here, plus1 has type forall n : nat, fin n -> fin (n + 1), which is a curried dependent function type with two parameters: n and i. The forall specifies that other parameters and the return type may depend on the value of the first parameter n. Say you had a version of this in the tupled style. Then the parameters would have to be a dependent tuple, where the type of the second tuple element may depend on the value of the first tuple element:
(* hypothetical syntax *)
Definition plus1 (n : nat ; i : fin n) : fin (n + 1) :=
  (i.1 + 1 ; (* proof that i.1 + 1 < n + 1 *)).
This would have a type like forall arg : { n : nat & fin n }, fin (arg.1). Not the worst thing in the world, but I would say definitely more unwieldy.
Dependent types are famously immensely difficult to work with and make type inference undecidable, so even when you're working in a language with this capability, it's best to avoid dependent types in function parameters. Having said that, some people swear by them [1] and manage to write large codebases that make extensive use of them.
[1] The introduction chapter of the Cpdt book. Citation for the book:
Adam Chlipala. Certified Programming with Dependent Types. URL: http://adam.chlipala.net/cpdt/html/toc.html
Also on MIT Press.
Copyright emilia-h 2025-2026 licensed under CC BY-SA 4.0
In a pure language like Haskell, 0-ary functions <==> constants
Not when a language embraces currying fully and then you find that it’s used all the fucking time.
It’s really simple as that: a language makes the currying syntax easy, and programmers use it all the time; a language disallows currying or makes the currying syntax unwieldy, and programmers avoid it.
let f :: Int = foobinade a b
And the compiler immediately tells you that you are wrong: your type annotation does not unify with the compiler's inferred type.

And if you think this is verbose, well, many traditional imperative languages like C have no type deduction, and you need to provide a type for every variable anyway.
What you say is true. And it works, if you're the author and are having trouble keeping it all straight. It doesn't work if the author didn't do it and you are the reader, though.
And that's the more common case, for two reasons. First, code is read more often than it's written. Second, when you're the author, you probably already have it in your head how many parameters foobinade takes when you call it, but when you're the reader, you have to go consult the definition to find out.
But if I was willing to do it, I could go through and annotate the variables like that, and have the compiler tell me everything I got wrong. It would be tedious, but I could do it.