Episode: 3508 Title: HPR3508: Differences between C# and Haskell Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3508/hpr3508.mp3 Transcribed: 2025-10-25 00:42:31

---

This is Hacker Public Radio Episode 3508 for Wednesday the 12th of January 2022. Today's show is entitled "Differences Between C# and Haskell" and is part of the series Haskell. It is hosted by tuturto, is about 29 minutes long, and carries a clean flag. The summary is: tuturto talks about some of the differences between C# and Haskell.

Hello, and welcome. This is tuturto and you are listening to Hacker Public Radio. Today's episode is about differences between C# and Haskell. Some time ago I was prompted to make an episode about the differences between Haskell and C#. I am probably going to omit a lot of things accidentally. I write C# as a day job and tinker with Haskell as a night job. So let's get started.

The two languages have pretty different origins. C# is designed to be a practical language for real-world problems. When they made the language, they were thinking of how people could use it to make useful programs that solve real-world problems. Haskell, on the other hand, was designed to be a language for programming language research. At the time when Haskell was invented, the researchers doing programming language research were all writing their own languages, and that was hindering the research. They formed a committee that designed a language that people could use to do the research, so that sharing research results and collaborating would be easier. So there's Haskell, and then there's a whole bunch of extensions: you have the core language and then you can enable all kinds of various things. It's an adventure in itself to look through all those different extensions and try to figure out what they do. Sometimes learning new things in Haskell is like reading a bunch of math and computer science papers, because that's what it really is.

Okay, so the main paradigms are different. C# is mainly object oriented, so you use objects that encapsulate data and methods that modify the data. They are tightly coupled, and data and methods are usually defined together. If your class has an interface, the interface itself is defined somewhere else, but when your class implements that interface, the implementation is specified and defined at the same time as you are implementing the rest of your class, so they are really tightly packed together. There are extension methods that you can use to add new methods to existing data, but these are not used that much. Subtyping is normal: every object in C# inherits, if nothing else, from object, and often designers create object hierarchies. The classical example is that there's an animal, and then there's a dog that is an animal and a cat that is an animal, so they form a hierarchy.

Haskell is purely functional, and data and functions are not tightly packed together. You define data in one place and the functions that modify or use that data in another place. And if you want two different types of data to share something, so that you can perform similar operations on them, like calling a function with the same name over two different types of data, you use type classes for that.
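As a rough sketch of what that separation can look like (the Person type and the describe function here are made up just for illustration, they are not from the show):

    -- The data is declared on its own; no methods are attached to it.
    data Person = Person
        { name :: String
        , age  :: Int
        }

    -- Functions that use the data can live somewhere else entirely,
    -- even in a different module.
    describe :: Person -> String
    describe person = name person ++ " is " ++ show (age person) ++ " years old"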
The type class instances are defined somewhere else than where your data is defined, or at least they can be, so the data and the functions are not tightly coupled in Haskell.

In the Haskell world, functions always take parameters and return a value, and a function called with the same parameters always returns the same value. This is true even for random numbers, which is very neat when you think about it. So you can consider a program as one huge mathematical equation where you plug in inputs and receive an output. Of course, as soon as you start interacting with the real world, reality shows up: every time you read from the database you are not going to get the same data, because the state of the database differs, but if you consider the state of the database to be a part of the input, then with the same inputs you get the same output.

Subtyping doesn't exist in Haskell; you cannot take data types and build a hierarchy out of them. Instead you use composition to do the same thing. In C# you can of course use composition too, and depending on the people writing the program, they might actually prefer using composition instead of building a deep object hierarchy.

The languages have a different take on data. In C# most of the data is mutable, so if you have a person object and you change the age of the person, you still have the same person object, but now the age of that person is different. Strings you cannot change; you always create new ones, and this is done for performance reasons. So for example if you have some string and you call the ToUpper method on it, you get a new string as a result: the old string is still the same as it was before, and in the new string all the letters are converted to upper case.

In Haskell, on the other hand, most of the data is immutable by default. So if you have that person record, you cannot change the age. The person has an age, but if you call a function to change it, so to speak, you are going to get a new person record where the age has been changed. The old person record exists and the new person record exists, and the difference between them is the age. One might think that this leads to a lot of copying of data and huge performance issues, but because all the data is immutable by default, you can share data between different records, for example. So in the case of this person example, the new person record would have all the other values of the original; the two records would be sharing the data, except for the age that was changed. There are ways to have mutable data: there are IORefs and STRefs and software transactional memory (STM). They allow mutable data, but they are quite rarely used; there are places where they make sense, but most of the time you end up writing your programs with immutable data. And of course you can write immutable data in C# too, but that is not the norm; you usually have mutable data there.
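With the hypothetical Person record from earlier, that age change might look roughly like this; the record update syntax builds a new record, and the unchanged fields are shared with the old one:

    -- haveBirthday does not modify anything; it returns a brand new Person
    -- whose fields, apart from age, are shared with the original.
    haveBirthday :: Person -> Person
    haveBirthday person = person { age = age person + 1 }

    -- Both records exist afterwards; only the age differs between them.
    example :: (Person, Person)
    example =
        let original = Person { name = "Tuula", age = 35 }
        in (original, haveBirthday original)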
The languages have quite different execution models. This was something that tripped me up plenty of times when I was learning, and still does sometimes. C# is a strict language, meaning that the values passed to a function are evaluated before the function call. So, how would I explain this: if you are calling a function in your program, and the arguments to that function call are themselves calls to other functions, then those calls to the other functions will be evaluated first, and the values produced by those calls are given as parameters to your original function call.

Haskell is non-strict, so values are evaluated when needed. In our case, the original function would actually be called with something called a thunk, which is just a computation that has not been performed yet. Only when the value is needed, or it is explicitly forced, is it computed. So in essence, you don't necessarily know the exact order in which your program is evaluated in Haskell, but that doesn't matter, because Haskell itself can figure out what values are needed and when, and because function calls with the same parameters always return the same results, you can order them however you want.

Haskell even evaluates values only as far as is needed or forced. So if you give a computation that computes a list of some number of items as a parameter to a function, and then inside the function you check the first five elements of that list, only those five elements will be computed; the rest of the elements are not computed yet, they are just waiting there to be computed if they are ever needed. This means that you can have infinitely large data structures if you want to, as long as you don't evaluate them completely. If you write [1..5], that is a list with the numbers from 1 to 5, but if you omit the upper bound and write [1..], you get a list that has all the numbers starting from 1: 1, 2, 3, 4 and so on. And you can pass that around in the code base; you can even say to that list of all numbers, let's multiply all of your values by 2, and then you have a list of all the numbers multiplied by 2, and that's nothing strange in Haskell. Then at some point when you say, okay, I need to print the first five elements of this list on the screen, at that point Haskell will evaluate those computations, and you get the first five elements printed on the screen. However, if you say, I want to know how many numbers there are, then Haskell will start evaluating the list, computing the values, counting the numbers, and keep going until it runs out of memory.

So you have to think a little bit about what you are doing. Another thing is that if you are passing around computations and those computations are really big, for example if you are building a long list of numbers and then summing those numbers together, then Haskell will build you a computation that is basically one plus two plus three plus four plus five plus and so on; it will build the whole computation before evaluating it. And if the list is long, you are going to spend quite a lot of memory building up that computation. In Haskell the term for that is a space leak: you have a fault or logical error in your application that causes it to use a huge amount of memory. That's why in some cases you specifically tell Haskell, hey, please force this value; I don't want to keep a huge computation in memory, I just want the value and then I can pass it around.
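A small sketch of the laziness and of the space leak, assuming the lazy foldl from the Prelude and the strict foldl' from Data.List:

    import Data.List (foldl')

    -- An infinite list; nothing is computed until something demands a value.
    allNumbers :: [Integer]
    allNumbers = [1..]

    -- Only the five demanded elements of the doubled list ever get evaluated.
    firstFive :: [Integer]
    firstFive = take 5 (map (* 2) allNumbers)   -- [2,4,6,8,10]

    -- Asking for the length of allNumbers would keep evaluating forever,
    -- until memory runs out.

    -- The lazy foldl can build the whole 1 + 2 + 3 + ... thunk before
    -- evaluating it, which is the classic space leak; foldl' forces each
    -- intermediate value instead.
    lazySum, strictSum :: Integer
    lazySum   = foldl  (+) 0 [1 .. 10000000]
    strictSum = foldl' (+) 0 [1 .. 10000000]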
So there's a whole new set of different kinds of bugs in the Haskell world that don't really exist in the C# world.

The type system. This is hard to describe briefly; there's probably enough material here for many episodes. It is really fascinating, and it is a really complex thing, at least to me, if you start reading up on all the little details, but I'll try, we can cover the very basics. I couldn't find a name for the C# type system, so I'm just calling it the C# type system. In C#, a type carries information like how much space it requires and the minimum and maximum values it can represent, the members it contains, like methods, fields, events and so on, and the base type it inherits from, which is object if you haven't specified anything else; everything inherits from object. I actually don't know what object itself inherits from, I haven't ever looked into that; maybe it's a special case written into the system and it doesn't inherit from anything, or it inherits from itself, I don't know. A type also defines the interfaces it implements and the kinds of operations that are allowed.

For Haskell, this is from the Haskell 2010 report: Haskell uses a traditional Hindley-Milner polymorphic type system to provide static type semantics, but the type system has been extended with type classes that provide a structured way to introduce overloaded functions. That's a mouthful and I don't actually really know what all of it means. But I know that it's a simple yet expressive system. You have algebraic data types, which means that you build your data basically from enumerations, not enums like in C#, but from types where you list the values they can contain, and you can combine those to form bigger types. Then there's syntactic sugar on top of that, so you can build, for example, records out of them. You have type classes for function overloading. So if you have two different kinds of records, for example a dog and a cat, and you want a function speak that works for both the dog and the cat, then you define a type class, for example Animal, that has a function speak, and then you implement that type class instance for the dog and the cat data. Then you can call the speak function with the dog data and it says woof, and with the cat it says meow.
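A minimal sketch of that dog and cat example might look like this:

    -- The type class plays roughly the role an interface plays in C#.
    class Animal a where
        speak :: a -> String

    -- The data types themselves don't mention Animal anywhere.
    data Dog = Dog
    data Cat = Cat

    -- The instances tie the data to the class, and they can be defined
    -- somewhere else entirely.
    instance Animal Dog where
        speak _ = "woof"

    instance Animal Cat where
        speak _ = "meow"

    -- speak Dog  ==> "woof"
    -- speak Cat  ==> "meow"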
The type system is capable of telling whether a function has access to input and output, among other things. This is pretty neat, I really, really like this, and sometimes I sorely miss it in a big C# program, because it's a very big advantage when you are writing, for example, some complicated logic and you can spot immediately if some part of that logic inside some loop is trying to read data from the database. I have had several nasty performance bugs because of this.

And there is no null in Haskell. Obviously you can say that some record doesn't exist, but that is represented at the type level: you are saying at the type level that this record may or may not be there, and while you are programming you have to take that into account; you can't just use the data without realizing that it might or might not be missing. In C# there are nullable types that are basically the same thing, but they arrived quite late, so there are many, many years of C# programs, libraries, frameworks and whatnot that don't take advantage of nullable types. So in C# you always have to be careful when you are handling reference type data: could this be null or not? But that's just how it is.

Syntax is the first thing people usually notice differs between C# and Haskell, but in my opinion the syntax doesn't really matter that much in this case. C# uses more parentheses and has semicolons at the end of the lines, while Haskell has plenty of funny functions: it has the bind operator, >>=, that is often seen, and the <$> operator, the infix version of fmap, and there are plenty of funny infix functions in Haskell. Reading Haskell code when you first see those is tricky, but in my opinion the syntax in itself isn't that big of a hurdle. The bigger hurdle is that the programs are structured differently. All these things that I mentioned earlier lead to the fact that Haskell programs and C# programs solve the same problems in quite different looking ways.

In C# you have objects that form a graph that makes up the program, and those objects communicate with each other with messages, method calls or even network calls, usually mutating state here and there. You don't have one centralized place where all the mutable state lives with everything around it immutable; in a usual C# program you mutate state everywhere, and you can interact with the outside world from anywhere inside your program. In Haskell it is not objects that form a graph, but functions that form a tree that makes up the program; this is the mathematical equation that I compared a Haskell program to earlier. Data is passed into functions that return new data, and the mutable state is confined to a very small portion of the program, usually. Of course you could write a program where you mutate state everywhere, but that would not be idiomatic Haskell and it would feel pretty clumsy to write programs that way. Interactions with the outside world are done from very specific locations inside your program.
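As a rough sketch of how the types carry that information (the function names here are invented for illustration):

    -- The type promises this function cannot read the database, call the
    -- network, or touch anything else in the outside world.
    applyDiscount :: Int -> Int
    applyDiscount price = price - price `div` 10

    -- The IO in the type tells you this one is allowed to interact with
    -- the outside world.
    askName :: IO String
    askName = do
        putStrLn "What is your name?"
        getLine

    -- There is no null; a value that may be missing is a Maybe, and the
    -- compiler makes you handle both cases before you can use it.
    greet :: Maybe String -> String
    greet (Just name) = "Hello, " ++ name
    greet Nothing     = "Hello, whoever you are"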
Usually a Haskell program tends to look like this: you have a big, or some size of, core where there is no mutable state and no interaction with the outside world. That's the logic of your program; you give it inputs and it gives you outputs, and it's pure logic, pure calculations, pure computations. Interactions with the outside world, reading the keyboard, printing stuff on the screen, doing graphics, calling over the network, interacting with the database, getting a seed for your random number generator, all this stuff is a thin crust around this pure core.

So they solve the same problems with different approaches, and both C# and Haskell can be used to write programs that solve the same problems; the solutions might look different, but both are capable tools for writing programs. There are a lot of people who use Haskell for writing products and business software, and I imagine that there are a lot of people who use C# to do research. And these languages are borrowing features from each other. I don't know how much Haskell has borrowed from C#, I would imagine it has borrowed something from there, and C# definitely has borrowed things from Haskell. That's how programming languages evolve anyway, and that's the point of doing the research, well, that's one point of doing the research: you get new features and new ideas that you can take into use in other languages if you deem them useful. I think it was Simon Peyton Jones, one of the researchers who has put a lot of work into Haskell, who said that Haskell and C# started as opposite kinds of languages: C# was a very dirty, or pragmatic, language that you could use to get stuff done, and Haskell on the other hand was a very pure and useless language; in the beginning even printing on the screen didn't exist in Haskell, I have been told. And slowly they have been moving towards each other, so Haskell has been growing more practical and C# has gained some features from Haskell.

I like both languages. Personally I prefer doing my home projects, my hobbies, in Haskell, I just enjoy it more, but C# works really well. I don't think that if we were doing what we do at work with Haskell it would be that much more fun anyway, because you have to remember that usually when you are coding for somebody else, somebody else is going to say: I want these kinds of features, I want this thing to work like this, and you have this much time to do that for me. So there are a whole lot of constraints that don't exist when you are coding as a hobby, because as a hobby you can take as much time as you want on some tiny little insignificant thing, you can rewrite the whole code base if you feel like it, and nobody is going to say anything about that. They are two different approaches to programming, and I believe that might be part of why I like Haskell as a hobby programmer, because I can take a lot of time to figure out, for example, type level Haskell. Sometimes Haskell seems sort of like a puzzle to me: this is a problem, and then I am just trying to find the pieces that fit together and build a beautiful picture out of them. But that is just because I have plenty of time to do that at home.

Anyway, I am starting to ramble here, so that is the episode. Thanks for listening. If you have any questions or comments, you can reach me via email or on the Fediverse, or even better, you could record your own Hacker Public Radio episode. Ad astra.
You've been listening to Hacker Public Radio at HackerPublicRadio.org. Today's show was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hosting for HPR is kindly provided by an honesthost.com, the Internet Archive and rsync.net. Unless otherwise stated, today's show is released under Creative Commons,