[00:32:11] * Sonderblade [Sonderblade!~bjourne@h-52-183.A157.priv.bahnhof.se] has quit (Ping timeout: 255 seconds). [00:50:20] * jtimon [jtimon!~quassel@117.29.134.37.dynamic.jazztel.es] has quit (Ping timeout: 240 seconds). [02:48:00] * kernelj [kernelj!~kernelj@unaffiliated/colonelj] has joined the channel. [02:50:28] hey I noticed that in a stack language with quotations id (take something off the stack then put it back) and call (open a quotation) are different things but in a functional like haskell you'd have id x = x and call f = f so they're equivalent there [02:52:15] given a fixed point combinator like in Factor : fix ( f -- ) [| f | [ f fix ] f call ] call ; inline [02:53:58] "[ call ] fix" goes on forever whereas "[ id ] fix" gets stuck at "[ [ id ] fix ]" [02:54:12] Does anyone know anything more about this distinction/similarities? [02:57:53] N.B. it is the "[ call ] fix" that exhibits any type like "fix id" (or "fix call") in Haskell does [02:59:40] I'm still trying to design my own language and I came up with a type for [fix] of [#([#(t) t#]) t# `@t] in my crazy language [03:03:32] roughly translates: for all types t, consume a function from the stack that takes a function of type t, and has the effect of calling t, then after taking the function have the effect of calling t [03:25:03] the function matching is intensional equality I guess [03:30:59] might be interesting to try it out in Cat to see what type it gives fix, except the website is not working so I can't get to it :( [03:35:00] hmm it's on code project [03:40:18] * otoburb [otoburb!~otoburb@unaffiliated/otoburb] has quit (Quit: leaving). 
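[Editor's note] The id/call distinction discussed above can be sketched outside a stack language. Here is a minimal Python model (an illustration, not part of the log): quotations are zero-argument thunks, and `fix` follows the quoted Factor definition `: fix ( f -- ) [| f | [ f fix ] f call ] call ;`, i.e. `fix` passes the quotation `[ f fix ]` to `f`.

```python
# Minimal model: quotations are zero-argument thunks.
def fix(f):
    return f(lambda: fix(f))       # the thunk plays the role of [ f fix ]

def quot_id(q):
    return q                       # id: put the quotation back, unopened

def quot_call(q):
    return q()                     # call: open the quotation

# "[ id ] fix" gets stuck: it returns the unopened thunk [ [ id ] fix ].
stuck = fix(quot_id)
assert callable(stuck)

# "[ call ] fix" recurses forever (don't run this):
# fix(quot_call)                   # RecursionError in Python
```

This makes the distinction concrete: `id` terminates because it never opens the quotation it is handed, while `call` opens it and so recurses without end.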
[04:45:21] hm this seems like an old version of Cat or something [04:53:13] ah https://code.google.com/archive/p/cat-language [05:03:07] yeah and the new version is wildly different -.- [05:09:28] assuming it is correct I got a type of ('A ('A ('B -> 'C) -> 'D) -> 'D) from Cat for define fix(f) { [ f fix ] f apply } [05:11:36] and "[apply] fix" goes into a loop as you'd expect [05:22:24] but it has a type of ('A -> 'B) which I think is saying any type [05:38:16] yeah and I think the ('B -> 'C) would be ('A -> 'D) in normal usage but since it might not use it it doesn't matter? [06:05:24] * FreeFull [FreeFull!~freefull@defocus/sausage-lover] has quit (Ping timeout: 260 seconds). [09:30:21] * jtimon [jtimon!~quassel@117.29.134.37.dynamic.jazztel.es] has joined the channel. [10:42:11] * rudha [rudha!~rudha@p200300760E7BD9F2B590D31D7F016BAA.dip0.t-ipconnect.de] has joined the channel. [12:15:29] kernelj: If you're interested in Cat, you might be interested in Kitten: http://kittenlang.org [12:17:06] Quotations are not functions, they need to be unboxed/opened to be applied. [12:22:33] They are somewhat like Haskell's arrows. And like arrows, they are more useful if the interface is more restricted. I don't like 'call', because it doesn't have a fixed arity. [12:24:14] You might also be interested in the language I'm working on, Popr: https://github.com/HackerFoo/poprc [13:16:21] erg: I have *heavy* experience with MS SQL. [13:16:44] In a theoretical universe, I'd definitely encourage you to write native bindings in Factor. The wire protocol has been stable for *years*. [13:17:15] That said, just binding to ODBC (or ADO proper if you're bored) would be a ton faster, and have the side-effect of giving Factor native access to all the other SQL databases on Unix and Windows that we don't want to write custom drivers for. [13:17:26] (Dialectical differences will still persist, obviously, but you'd avoid writing the driver proper.) 
[13:23:24] hackerfoo: interesting, your | alternative-paths thing. I have something similar in my language [13:25:29] kernelj: What is your language? [13:25:43] it's called starpial but I don't really have much online about it [13:28:06] hackerfoo: here's a sample http://dpaste.com/2984BB2 [13:29:43] I haven't written a compiler/interpreter yet :S Still in the design phase [13:30:51] I see [13:31:21] What would alternatives do in your language? Are they the same as in Popr? [13:31:33] they're used for unification [13:31:52] so you can do logic programming [13:33:26] How would unification work? [13:34:20] when you pattern match on something it restricts the number of possible values [13:34:48] I have `@ which leaves a value that is completely undefined [13:35:01] like the variables in Prolog [13:35:18] I'm not totally sure if it works out so far [13:39:02] hm I have to go now but if you could leave any description of the relationship between functions/quotations/arrows that might be useful [13:39:46] I see. You might be interested in miniKanren, which was one of the inspirations for my language: http://minikanren.org [13:40:04] I'm still not happy with the type Cat gave for 'fix' either, compared to what Haskell gives: (t -> t) -> t [13:41:19] hm thanks, that looks interesting, anyway I'll be back later [15:01:04] bmp: is the TDS protocol right? or a different one [15:09:03] erg: Yeah. [15:11:16] erg: Do note that there's a lot more than that, but it's possible to support just subsets. https://msdn.microsoft.com/en-us/library/ee209073(v=sql.105).aspx has the most recent TDS spec. [15:11:23] (Near the bottom.) [15:13:34] What do you need/want this for? [15:17:57] for talking to a large sql-server db [15:57:04] * FreeFull [FreeFull!~freefull@defocus/sausage-lover] has joined the channel. [16:08:41] * otoburb [otoburb!~otoburb@unaffiliated/otoburb] has joined the channel. 
[18:29:27] I still feel that the type of ('A ('A ('B -> 'C) -> 'D) -> 'D) for fix is wrong [18:31:42] or maybe it's just more general [18:36:48] I suppose I should try writing it point free and then I can see the derivation [18:40:26] What does the Cat version of fix do? That type doesn't make any sense. [18:41:05] I haven't tested it actually [18:41:18] I'll do that first... [18:45:49] ok so I've got [over 0 == [pop2 1] [over 1 - swap apply *] if] [18:46:12] type is ('A int ('A int any -> 'A 'b 'c) -> 'A any) [18:46:46] 5 thatquotation fix ----> 120 [18:47:00] so it is working fine as a fixed point combinator hackerfoo [18:48:36] it gives the type of [5 thatquotation fix] to be ('A -> 'A any) [18:49:35] it's not clear why the 'A s are there [18:49:44] or why that is any and not int [18:50:43] I think 'A is a row variable, the type of the rest of the stack. [18:51:03] yeah but couldn't it just be omitted? [18:51:46] 120 has a type of (-> int) and there's no reason for it to be different [18:52:41] It's probably a bug in the type inference. [18:53:05] What happens when you try something like "1 2 3 4 5 [pop] fix"? What is the type? [18:53:40] gives ('A -> 'A int int int int int) [18:55:40] to be fair the type of [1 2] is ('A -> 'A int int) [18:56:37] yeah it automatically adds in those row variables before it even does anything [18:57:18] I guess it should be something like "[nip apply] fix". I'm trying to eat the entire stack. [18:58:04] there's no nip word, is that the same as [pop] dip? 
[18:59:43] I'll go with yes [19:00:50] for [1 2 3 4 5 [nip apply] fix] it gives type of ('A -> 'B) [19:01:17] same type as you get for just [apply] fix [19:01:57] really they're the same non-terminating functions [19:02:10] one of them spins forever, one eats the stack and explodes [19:02:46] that's quite a useful way of clearing the stack though thanks ;) [19:04:12] too bad there isn't a definitive vocabulary of stack words [19:04:42] even in my own language I dispensed with drop and pop for dealing with stack objects (lists) [19:04:58] oh right yeah I was going to rewrite fix point free [19:06:56] in Factor it was [ [ fix ] curry ] keep call but Cat defines keep differently [19:10:58] 3 [+] curry ---> [3 [+] [id] dip id apply] [19:11:01] what a bloody mess that is [19:12:05] it does the right thing though I suppose [19:13:37] oh wait it's still not point free since it's recursive :S [19:15:26] same type as before at least [19:16:07] define fix { dup [ [ fix ] curry ] dip apply } [19:26:28] * rudha [rudha!~rudha@p200300760E7BD9F2B590D31D7F016BAA.dip0.t-ipconnect.de] has quit (Quit: Leaving). [19:31:39] You're trying to write the Y combinator? [19:32:26] I haven't tried that [19:32:35] I wish there was a common vocabulary for concatenative combinators. [19:32:55] yeah it's a great source of confusion [19:33:34] Forth is the most authoritative, but anything beyond basic stack shuffling doesn't apply to my language. [19:33:41] I tend to go with Factor's definition as authoritative but it isn't really [19:33:56] oh really? that's weird [19:35:05] define curry(x f) { [ x f apply ] #metacat } << that #metacat thing works wonders but it *still* doesn't get rid of the apply! [19:35:22] Forth isn't very functional, and doesn't have higher level combinators. [19:36:03] I never was enticed by Forth [19:37:19] It's just old and popular, so it's most likely that someone would understand names taken from it. 
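[Editor's note] The factorial test from earlier ("5 thatquotation fix ----> 120") can be checked with an eager fixed-point combinator. A sketch in Python (an illustration, not part of the log), using eta-expansion to delay the recursive reference, as an eager language requires:

```python
# Eager-language fixed point via eta-expansion: the recursive
# reference is wrapped in a lambda so it isn't evaluated too soon.
def fix(f):
    return lambda n: f(fix(f), n)

# The quoted Cat step [over 0 == [pop2 1] [over 1 - swap apply *] if],
# written applicatively; self is the quotation being fixed.
fact = fix(lambda self, n: 1 if n == 0 else n * self(n - 1))

assert fact(5) == 120    # matches "5 thatquotation fix ----> 120"
```

The `lambda n:` wrapper is exactly what the quotation `[ f fix ]` provides on the stack: a suspended recursive call that is only opened when applied.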
[19:38:43] It's funny that I like concatenative languages because naming is hard, only to find that naming combinators is *really* hard. [19:39:23] Naming a function is easy enough, though. [19:41:01] Here's my library so far: https://github.com/HackerFoo/poprc/blob/master/lib.ppr [19:41:19] Sometimes I just use symbols. [19:42:22] why not .popr for extension? [19:47:51] suppose I could show you mine [19:49:30] Maybe someone might want to run the compiler on DOS? I don't know, .ppr looked nice. [19:50:33] There are only a few Popr source files in existence, anyway. [19:52:34] hackerfoo: http://dpaste.com/16G6B5H [19:52:58] I think I have a dozen or so .star files [19:55:05] Are you working on an implementation of the language currently? [19:55:43] no, probably not any time soon either [19:57:26] this is with the original type system btw [19:58:28] I had to refine the semantics of my language quite a bit to achieve the goals I set for it. [19:59:25] PoprC is the second implementation, the first is here: https://github.com/HackerFoo/peg [19:59:35] I started on a newer one with dependent types hoping to be able to add proof stuff into it but I think with non-termination that's a bit useless [19:59:55] The language changed quite a bit with each implementation. [20:01:19] the idea is without any implementation there's less baggage when changing the language [20:01:53] doesn't feel like it's quite settling down yet in the design [20:02:06] You can have total functions that process codata, e.g. streams. [20:03:28] I'm going to try to make my language provably terminating, but that's not a primary goal. [20:04:10] I have a concept of streams in the language but not very formalized [20:04:16] mietek on ##dependent makes a good argument for it. 
[20:04:57] it makes programs harder to write [20:06:40] I want to make writing correct programs easier, and wrong ones impossible :) [20:07:56] I want it so you can write programs in the types [20:07:56] and have proof obligations that they match the code [20:07:56] just not sure how to go about it [20:08:06] We'll see. My compiler already does termination checking, so I can just improve it. [20:08:53] My approach is to unify the term and type languages by using assertions, and then proving that they are unreachable. [20:11:18] termination checking is hard, is it not? [20:15:18] I think you just need to identify the variables that change in recursion, and then just prove that they move towards the base case every step. [20:15:49] what if you move past it? :) [20:16:01] I already do the identification part, so if nothing changes, the compiler gives an infinite recursion error. [20:16:52] I'm simplifying quite a bit. There's probably a mathematical description based on lattices. [20:18:19] so does it work if you have something like an odd number and keep subtracting 2 until it reaches zero (never)? [20:18:20] I have okay mathematical intuition, but I'm terrible with terminology. [20:19:17] You'd have to inductively prove that you'll hit the base case. [20:20:39] I've found that writing a compiler is all about cheating, though, so it doesn't have to be perfect right away. [20:21:01] I still haven't really got my head around how you encode such proofs in the language [20:21:06] As long as it's possible in theory. 
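[Editor's note] The "moving past the base case" pitfall raised above is concrete: a measure that strictly decreases every step is not enough if it can skip over the base case. A small Python illustration (not from the log; the `>` guard is added so the function actually terminates):

```python
# n strictly decreases each step, yet for odd n a loop testing
# "n != 0" would never hit the base case -- it skips from 1 to -1.
def countdown(n):
    while n > 0:        # guarded with > rather than != so odd inputs stop
        n -= 2
    return n

assert countdown(4) == 0     # even: lands exactly on the base case
assert countdown(3) == -1    # odd: skipped past 0; with "n != 0" this loops forever
```

This is why, as said above, a termination checker needs an inductive proof that the base case is reached, not just that some quantity shrinks.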
[20:21:52] yeah my language design is pretty loose so I think there'll be a lot of special casing and things that might not work [20:22:43] I try to move towards making things more consistent and computable [20:23:31] the 'laziness' thing can be quite tricky [20:24:41] I want something like {1 [dup] loop} to have a type of {int [dup] loop} and still be valid even though it technically doesn't terminate [20:25:55] I feel like I need to go back to using some kind of stack effects notation since avoiding it seems cumbersome [20:29:56] hm quite recent http://prl.ccs.neu.edu/blog/2017/03/10/type-inference-in-stack-based-programming-languages/ [20:33:27] * MDude [MDude!~MDude@pa-67-234-94-147.dhcp.embarqhsd.net] has quit (Quit: Going offline, see ya! (www.adiirc.com)). [20:34:19] Regular Expression Order Sorted Unification <<< now that sounds interesting [20:35:40] given that I have regular expression sort of operators in the type system already [20:54:30] I don't like "stack based" since it implies that there is a stack, and if you have a stack, it serializes all operations limiting laziness and parallelism. [20:55:02] my idea for starpial is that the side effects don't have to be serialized [20:55:37] the stack is conceptual really [20:56:05] PoprC parses the source into a graph, which requires fixed arity, which means there is no access to the "stack". [20:56:06] it is there but it doesn't describe how the computation actually happens [20:56:26] data dependency graph? [20:56:33] Yes [20:57:00] it's only really possible to do with dependent types... part of why I want them [20:57:14] e.g. 
1 2 3 4 5 3 popn -----> 1 2 [20:57:37] the arity is dependent on a value [20:58:56] in my current dependent system that would be described with type of [popn]?[\x?uint x popn] [20:59:47] I found a lot of structural stack programs have types equal to themselves [21:00:00] but I don't know if the way I'm doing this makes sense [21:00:01] I don't think dependent types help with arity, since it will need to be known even for type inference. [21:00:42] I don't think the type would be inferred; the user would specify it [21:01:03] and then have a proof obligation to prove the type actually describes the program [21:01:13] that's what I'm aiming for if I can get there [21:01:34] The problem I arrived at is that if you have a combinator, such as map, and it can mess with the stack, it's hard to even know what map will do without knowing the arguments. [21:01:51] It may eat the stack, for example. [21:01:55] it's fine if you restrict the type of map [21:02:04] thereby making it useful [21:02:17] Which you will have to do for all combinators. [21:02:49] Then it's not much better than fixed arity, but much more complicated. [21:03:00] You'd have restricted arity. [21:03:24] doesn't have to be [21:03:35] here's my definition from my prelude [21:03:44] @map?[{>~t}[~t:*]:{>}]: {#{x>xs}#f x f# map(xs,f)..}`{#_#_}# [21:04:08] I just handle this with combinators such as apNM, which applies a quote to N inputs producing M outputs. [21:04:51] my map has a type and yet you can use it with something like [dup dupn] map [21:05:01] which turns {1 2 3} into {1 2 2 3 3 3} [21:05:26] it doesn't eat the stack because the stack effect says the function passed to map isn't allowed to [21:05:38] it's only allowed to consume one thing [21:06:28] Anything that can, for example, depend on the size of the stack, breaks the concatenative property. [21:07:04] it doesn't depend on the size of the stack [21:07:21] Is { ... } a quote? 
[21:07:29] no it's a list [21:07:46] well how it works in my language is that the inside of the { } is a stack while you're inside it [21:07:51] Oh. There's no difference in Popr. [21:08:27] I wanted quotations to be abstract things that can be optimized [21:08:41] or compiled [21:09:05] Okay, to expand the rule: you can't observe the stack from the inside. You can for example determine the size of a quote using code outside [21:09:16] the quote. [21:09:30] right, and that's fine right? [21:10:10] But you can't count the number of lexical words in the quote, it always behaves as if it were fully reduced. [21:10:25] btw, the other main reason I need lists and quotes to be separate is that my language has side effects and inside {} everything is evaluated immediately whereas in [] no side effects can occur until it is called [21:11:25] These are strict rules that can't be broken, or they make the semantics shaky and efficient compilation hard. [21:12:05] so what's the length of [1 +] ? [21:12:33] with {1 +} that's simply a stack underflow [21:12:46] Heh. That destroys the universe. [21:13:35] My language has set-values. If you're familiar with Prolog, the answer is no. [21:14:27] can you have [1 [2] [] | call] in your language then? [21:14:43] with length of {1,2} as a set [21:15:38] I think of | as dup'ing the universe to a parallel one, and 'False !' as destroying the current one. You can't reach another universe. [21:16:07] I think that makes it impossible to do something like 'findall' from Prolog [21:16:37] I thought my language was the same until I realised you could unify two sets [21:16:42] So, "[1] [] | head" produces "1". [21:17:15] You can try it online: http://hackerfoo.com/eval.html [21:17:36] cool [21:18:11] but yeah without findall you can't even do simple things like find all descendants [21:18:45] I think they were called 'extra-logical' in Prolog [21:19:04] yeah extra-logical predicates [21:20:07] Yes. 
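[Editor's note] The restricted map discussed above — the quotation may consume only one element but push several, so "[dup dupn] map" turns {1 2 3} into {1 2 2 3 3 3} — can be modeled in Python. `star_map` and the lambda standing in for [dup dupn] are hypothetical names for illustration only:

```python
# Hypothetical model: each element is handed to f in isolation, so f
# cannot observe or eat the rest of the list, but it may return any
# number of results, which are spliced back into the output.
def star_map(xs, f):
    out = []
    for x in xs:
        out.extend(f(x))    # f consumes exactly one element
    return out

# [dup dupn]: duplicate x, then make x copies -- i.e. x copies of x.
assert star_map([1, 2, 3], lambda x: [x] * x) == [1, 2, 2, 3, 3, 3]
```

The restriction in the stack effect is what keeps the combinator well-behaved: because `f` only ever sees one element, it cannot depend on the size of the stack.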
Don't want to repeat Prolog's mistakes by exposing the search implementation to the language. [21:20:28] If you want search in my language, you will need to implement it. [21:20:54] I'm just saying it makes the logical feature a lot less useful [21:21:08] This allows me to freely choose reduction tactics, or even let them be user specified. [21:22:08] It's not a feature. The main feature is partial evaluation and efficiency. Most of the other features followed from that. [21:22:22] Or I mean it's not the main goal. [21:23:16] do you have anything like unification variables? [21:23:29] I have to have very strict semantics because of this. I don't really make decisions about the semantics, I just discover rules I need to follow to make it work. [21:24:13] No. You can kind of fake it with alternatives, though. [21:24:59] it doesn't just compute each execution path independently, does it? [21:25:40] It's like the List monad, if you're familiar with Haskell, or LINQ or list comprehensions. [21:26:06] sounds like yes [21:26:19] No, it reuses already computed values because of laziness. [21:27:01] I'm not sure how lazy it is [21:27:16] but I was thinking you could have the set of all strings as alternatives [21:27:32] It's ridiculously lazy. [21:27:49] so the alternatives take up no memory or computation until used? [21:28:00] That's the idea. It's just obviously infeasible. [21:28:33] is it though? [21:29:33] You can try ":by f: " to see how it works, which defines and compiles f [21:30:17] Well, maybe not. I'm not sure. Maybe I could make it work. They'd have to be generated lazily. [21:30:17] * rgrinberg [rgrinberg!sid50668@gateway/web/irccloud.com/x-rfaanlexzuaeqykp] has quit (Ping timeout: 240 seconds). [21:30:19] * puckipedia [puckipedia!puck@puckipedia.com] has quit (Ping timeout: 240 seconds). [21:31:19] * puckipedia [puckipedia!puck@puckipedia.com] has joined the channel. 
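[Editor's note] The behavior described above — "[1] [] | head" produces "1", because the empty-list branch fails and only the surviving universe is observed — is List-monad-like. A sketch in Python (hypothetical names, not Popr's actual implementation), with alternatives as a lazy generator where a failing branch is silently discarded:

```python
import itertools

# Alternatives modeled as a lazy stream of "universes"; a branch that
# fails (taking head of an empty list) is discarded, like 'False !'
# destroying the current universe in the discussion above.
def alt_head(alternatives):
    for xs in alternatives:
        if xs:                  # head is only defined on non-empty lists
            yield xs[0]

# "[1] [] | head" --> 1: only the first universe survives.
assert list(alt_head([[1], []])) == [1]

# Laziness: an infinite stream of alternatives is fine as long as
# you only unfold finitely many of them.
naturals = ([n] for n in itertools.count())
assert list(itertools.islice(alt_head(naturals), 3)) == [0, 1, 2]
```

The second assertion illustrates the later point in the log that infinite alternatives are workable if they are generated lazily and unfolded one by one.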
[21:31:54] I think you can generate infinite alternatives, but they have to be wrapped in a quote so you can unfold them one by one. [21:32:30] sure [21:33:04] Like I said, I don't really get to pick the semantics, they are more discovered, like math. [21:33:33] that's kinda how I've been working as well [21:33:59] I say 'how would I write this in my language?' then keep trying to write stuff until I get stuck, then need to rethink [21:34:15] and you realise which things need to be true to make it work [21:34:25] I didn't even have quotations originally [21:35:15] I didn't want to base my language off of others particularly except to steal ideas [21:36:35] hackerfoo: can I define a function in popr? [21:39:10] I tried like you have in the .ppr file but it just gives parse error :S [21:41:05] You have to put it in a module. [21:41:20] so: [21:41:28] module some_module: [21:41:50] add1: 1 + [21:41:56] I get Parse error on saying module [21:41:57] [21:42:04] I'm using the interpreter [21:42:31] Oh. In the interpreter, you can use: [21:42:39] :def add1: 1 + [21:43:33] how do I write a string or character? [21:44:02] or is this an integers only language [21:44:47] No strings yet, although there is some internal support; it's not finished yet. [21:45:22] You can use lisp style atoms, though. They are words that start with an uppercase letter, like True. [21:45:40] ok [21:46:30] btw, shift+backspace is a bit brutal [21:46:32] ASCII characters would be easy to add, but I don't want to get distracted with things like unicode yet. [21:46:37] * MDude [MDude!~MDude@c-73-187-225-46.hsd1.pa.comcast.net] has joined the channel. [21:46:49] fair enough [21:47:21] although if you just use UTF32 it's not really difficult to support unicode [21:47:36] in a horrible and inefficient but workable way [21:49:54] My strategy is similar to C: if it's inefficient, don't do it. 
[21:49:58] I'm seeing weird stuff like [USER]: longjmp and ?01119 [21:50:12] Which is why implementation is an important part of the language design for me. [21:51:16] What is the source that caused it? Can you dump the log? ":log" [21:51:41] too late I already got it into a spin loop [21:52:20] woah got something more impressive [21:52:27] The web version isn't as stable as the native version, because Emscripten is very weird. [21:52:41] all I did was :def test: A B | [21:52:48] dunno if that's a valid function [21:52:57] then I try to use it and weird stuff happens [21:53:32] like I have to put a number like 1 test [21:54:07] I'll try making the native version.. [21:54:27] Oh, I get "incomplete expression" in Chrome when I try to execute test. [21:54:39] yeah try it with a number on the stack first [21:54:42] It's because of a bug I fixed recently. [21:55:14] Where you can't invoke constant functions. I can upload a fixed version. [21:55:52] [jason@j4s0n4rch poprc]$ ./poprc [21:55:53] /usr/bin/ld: cannot open output file poprc_out/: Is a directory [21:56:26] You need to call ./eval instead. [21:56:35] yeah already got there :) [21:57:11] ok this looks more promising [21:57:19] poprc is the compiler driver: ./poprc algorithm.sum will generate a binary at ./poprc_out/algorithm_sum [21:57:48] cells.c:380: as_single: Assertion `k < (sizeof(alt_set_t) * 4)' failed [21:59:10] And then ./poprc_out/algorithm_sum [1 2 3] will print [21:59:12] algorithm_sum => 6 [21:59:55] That's expected. It's an ugly way of saying that you've exhausted the 32 alternatives. [22:00:06] I only used 26 though [22:00:54] You can get statistics with :measure [22:01:29] I can also get a division by zero with :measure at start up :P [22:01:39] Yeah, I just saw that. [22:02:04] -fsanitize=undefined is awesome by the way. 
[22:02:37] boys cold-ones [ crack ] with each [22:03:14] I've got Pepsi [22:03:32] So there are only a small number of alternatives in the evaluator, but this turns out to not be much of a limitation, because it's the equivalent of nesting 32 if's: not really necessary. [22:03:57] The compiler eliminates alternatives, so there is no limit in generated code. [22:04:32] I still don't understand how 26 alternatives is more than 32 [22:05:05] Are you on a 32-bit system? [22:05:22] Then you only get 16 :) [22:05:39] yeah you guessed right [22:05:49] I should get a new PC [22:06:21] but I can still use the latest everything in 32-bit so... [22:06:31] There are a lot of limitations that don't yet get hit in practice, unless you're just testing the limits. [22:06:50] well all I wanted to do is represent the letters A-Z [22:06:58] For production, of course, I'll need some sort of lower-performance fallback. [22:09:14] That's not really how it's meant to be used. [22:09:35] You could instead generate numbers lazily, as discussed above. [22:09:43] I'll try to write it. [22:16:18] Well, I think this should work, but some optimizations I've made recently seem to have made it insufficiently lazy: [22:16:25] ladd: [1+ ladd] [dup] dip12 pushl [22:17:01] Oh wait, I forgot the alternative. [22:20:49] Well, I might need to get back to you on this, but it's possible at least with the language to make a stream of alternatives. [22:21:07] But the compiler right now gives a segfault instead. [22:21:31] never mind, I was about to give up anyway [22:21:47] I think I'll go back to thinking about the type system [22:30:54] hackerfoo: in my language it's currently ok to do something like "2 call" ----> "2" what do you think about that? [22:31:27] based on the intuition that every value is a function that puts itself on the stack [22:31:42] kernelj: Thanks for trying poprc out. It's always interesting to see what new users do. 
I think I'll move the poprc script and maybe rename eval to popr-interpreter, and also dump logs and backtrace to a text file on crashes. [22:32:10] I don't know what call does. [22:32:36] executes a quotation [22:37:08] and you may say but 2 isn't a quotation and that is kinda the point [22:37:08] I think the nearest equivalent in my language would be [1] ap01--> [] 1 [22:37:08] Yeah [22:37:08] it means you could do "2 call call call call" and it still evaluates to "2" [22:37:08] Also, how do you know how many arguments does call consume/produce? [22:37:09] actually in starpial call is a primitive and is written simply # [22:37:10] so that would be "2 # # # #" [22:37:12] it takes however many arguments the quotation takes and outputs however many it leaves [22:37:12] again in my weird dependent type system I ended up typing # by itself [22:37:12] which is why I think there's something wrong in how I'm doing things [22:37:12] * flogbot [flogbot!~flogbot@2001:4800:7814:0:2804:b05a:ff04:4ba7] has quit (Ping timeout: 245 seconds). [22:37:19] * flogbot [flogbot!~flogbot@2001:4800:7814:0:2804:b05a:ff04:4ba7] has joined the channel. [22:37:19] :wilhelm.freenode.net 353 flogbot = #concatenative :flogbot MDude puckipedia otoburb FreeFull jtimon kernelj shmibs PiDelport bmp jeremyheiler Sgeo kanzure dustinm`_ strmpnk hackerfoo m_hackerfoo diginet doublec groovy2shoes ephe_meral jeaye rjungemann merry koisoke erg shachaf earl_ carvite flout rotty [22:37:19] :wilhelm.freenode.net 366 flogbot #concatenative :End of /NAMES list. [22:37:32] I don't see why this would be useful or desired. [22:37:58] this brings us back to my original observation that 'call' and 'id' are quite similar and indeed the same in Haskell [22:38:38] it allows you to use values in contexts where you would normally be forced to write a quotation [22:38:48] I think I use that fact without thinking about it [22:39:03] In Haskell, application is denoted by adjacency, so this is misleading. 
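[Editor's note] The "every value is a function that puts itself on the stack" intuition above can be sketched with an explicit stack in Python (an illustration; treating quotations as Python callables is an assumption of this model, not how starpial represents them):

```python
# Model: call pops the top of the stack; a quotation (callable) is
# opened against the stack, while a plain value just pushes itself back.
def call(stack):
    top = stack.pop()
    if callable(top):
        top(stack)              # open the quotation
    else:
        stack.append(top)       # values are "functions" that push themselves

stack = [2]
for _ in range(4):              # "2 call call call call" ----> "2"
    call(stack)
assert stack == [2]

# An actual quotation still behaves normally, e.g. something like "3 [1 +] call":
stack = [3, lambda s: s.append(s.pop() + 1)]
call(stack)
assert stack == [4]
```

Under this model `call` on a value is a no-op, which is the claimed similarity to Haskell, where `id x = x` and `call f = f` are the same function.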
[22:39:44] it's like saying call(id(f), x) == id(call(f, x)) [22:41:10] can't tell if you intended to use tuples there [22:41:35] C-style calls, to make it more explicit. [22:42:07] or call f (id x), id (call f x) [22:46:58] they are the same though right [22:48:28] there's nothing to stop you writing "id f (call x)" "call (id f x)" [22:50:36] maybe partly not being able to call anything is just an inconvenience and you have to treat it like some sort of error [22:51:02] also I remember now there's a problem when binding things to names that you want to call automatically [22:51:27] you shouldn't have to enclose a value in a quotation to bind it to a name [22:52:14] my binding operators simply take the item at the top of the stack and bind it to a name [22:55:57] I only bind things at the top level. There's actually a second language for binding, called the module language. [22:56:37] It could be used separately from Popr, as a configuration language for example. [23:00:15] The only operation in the module language is binding a (list of) values to a name, and must occur within a module. Modules can not be nested, but can refer to other modules. [23:02:01] Every function in Popr has a fully qualified name of module.name, but the module can be omitted if it is within that module. [23:03:31] One special case is the value import. Any value in a module referenced in import is considered to be in the parent module. [23:05:17] Modules are also values, so they can be listed to the right of any name, so you can have something like a.b.word, where a.b = c, and c.word exists. [23:08:48] yeah I don't have any module system so far I only have objects where you can add fields to a stack [23:10:14] I highly suggest at least moving top level binding out of the expression language, so that you don't have to deal with the semantics of evaluating a top level binding, and dealing with the order of that. 
[23:11:05] you'll have to give some reasons or I won't do it ;) [23:11:06] Also, it can make compilation difficult, because the top level would have to be evaluated at compile time in a different context. [23:12:05] I haven't found any specific problems with it yet, but then I haven't tried writing a compiler [23:12:26] consider that you could execute the program as a script in an interpreter... [23:12:33] In the module language, the order of bindings doesn't matter, so you can define mutually recursive functions without forward declaration. [23:13:17] I can do that without any forward declaration already [23:13:29] I wanted to do that, because one language seems better than two, but I realize that separating them makes them both better and simpler. [23:13:42] to implement or to use? [23:13:51] Both. [23:14:00] I'm not convinced [23:14:41] I also wanted to make the evaluation language purely concatenative, so that it binds nothing. [23:14:52] plus I need to define (mutually) recursive functions within an object [23:15:16] As soon as you allow binding, you need to pass around an environment dictionary, which is expensive. [23:15:34] at compile time only [23:16:48] If the top level is the same as the rest of the evaluation language, you could use IO, or write something that doesn't terminate at compile time. [23:17:28] It would be possible to prevent that, but it's just another complication. [23:17:31] the whole program is a quotation that gets compiled [23:17:47] as I said before, side effects don't happen inside a quotation [23:18:31] Another problem is just that if the top level is a program that builds the environment, it's cumbersome to write that program. [23:18:57] And then you must allow binding inside functions, or it's inconsistent. [23:19:05] of course [23:19:20] it's a scoped binding [23:19:48] I've seen other concatenative languages allow "local" binding, and elegance is traded for expediency. [23:20:20] Soon it just becomes an ugly applicative language. 
[23:21:30] maybe have a look at the sample program I posted again and see if you still think it's ugly http://dpaste.com/2984BB2 [23:22:54] Oh, if that wasn't enough, a stray token could break the whole thing with weird error messages (or none at all.) [23:22:55] * rgrinberg [rgrinberg!sid50668@gateway/web/irccloud.com/x-zuyncygndalwgxgm] has joined the channel. [23:23:14] what do you mean by stray token? [23:24:10] So say I use an earlier syntax I considered, with :def as a word that binds a quote to a word: [23:24:31] [1+] "add" :def [23:25:05] in starpial that would be: [1+] @add [23:25:10] [1-]. "sub" :def [23:25:37] oh yeah I meant to ask what that . does? [23:25:40] Did you see the stray dot? What kind of error message would that print? [23:25:56] It's composition: [1] [2] . --> [1 2] [23:26:40] I don't think it would print an error :P [23:26:58] anyway that's why I have the special syntax in starpial [23:26:59] the :: [23:27:15] @word:: code goes here [23:27:16] It's kind of cool to do this, because it allows meta programming, but I've come up with a better way to do it in the module language, because listing multiple modules merges the bindings. [23:27:19] desugars to [23:27:28] (code goes here) @word [23:27:40] the () is an assertion that it's a single value added to the stack [23:27:41] Kind of like ML functors. [23:28:37] you can do multiple inheritance in starpial to get the same effect [23:28:45] if I understood you correctly [23:29:13] { moduleA <*> moduleB <*> } [23:29:42] oh and then a <* so you don't have to refer to the object with .thing [23:31:23] It gets more interesting when you leave something undefined in moduleA, and define it instead in moduleB, so you could use a module as an interface. [23:32:19] I hadn't thought of doing that [23:32:54] So it allows you to use names as parameters that can be bound in another module. 
[23:33:28] right, I don't think that's possible in Starpial as it is [23:35:38] you would probably use a compositional approach where you pass in the module as a parameter [23:36:20] viz. [#othermodule { othermodule <*> code }] @mymodule [23:36:57] I can see problems with this [23:37:19] that's probably where the type system comes in though [23:38:53] I haven't implemented it yet, but it should just be a map merge on the environment, before type inference/partial evaluation. [23:39:56] Actually, it is implemented for import, I guess. I just don't have a way to mark something undefined yet. [23:40:22] I have a way to mark something undefined but that creates a unification variable [23:45:33] Well, it tried to work. Yet another bug, I guess :( [23:47:04] yeah that's the thing with implementations, they have bugs :) [23:47:12] no implementation, only design bugs :( [23:47:26] I hate when the syntax clashes [23:47:50] I wanted to be able to describe list types using {>int} for example [23:48:13] but that clashes with syntax for exporting a binding called int [23:48:29] I have {int*} though [23:48:55] I'm not sure how useful showing the direction of strictness is, or whether reverse strictness even makes sense [23:50:11] @rev?[{>}:{<}]: {#{x>xs} xs rev.. x}`{#_}# [23:50:28] @revr?[{<}:{>}]: {#{xs I'm not sure what you mean by directional strictness. [23:52:20] one end is lazy [23:52:57] Are you talking about a cons list vs. a snoc list? [23:53:25] I'm not sure what that is [23:54:08] data List a = Cons a (List a) [23:54:10] anyway if you replace x with x.. in the above I can foresee you'd have issues [23:54:28] data ListR a = Snoc (ListR a) a [23:55:03] I'll say the answer is yes, since it looks like what I mean [23:55:51] To be honest, I can't read the above expressions very well. There are a lot of symbols I don't understand, so I'm kind of guessing.