Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!
#3 is a method of stealing. The contributions are "reimplemented"... which is not necessarily possible for small contributions. And reimplementation of large contributions can take a long time - and it causes the code of the "free" version to drift away from the proprietary-licensed one, frequently into areas not desired by the company but desired by the users, thus becoming twice the support problem. This is partly what happened to MySQL.
Sun started by ignoring the users (it does take time to transfer the code and get familiar with it), then started adding proprietary features that many did not want. The free code continued to drift until the original developers created a fork of the free code and added the desired features and contributions... Now MariaDB has pretty much superseded MySQL from Sun (now Oracle), and in slightly incompatible ways.
The Free Software Foundation says you can charge for free software - a distribution charge - but then they say that others can charge as much as or more for your software, which I find disgusting. The FSF, I think, takes the ownership of the code from the rightful owner and says it now belongs to the person who bought it, even though what they purchased was a module, not the source. I think it is software socialism: take from the producers and give to the nonproducers.
It only gives others the same rights to the code that you got when you received it. If they make changes and pass it along, then they have to pass along the changes.
Share and share alike.
Any charges are a side issue. Perhaps their reputation of delivery is better, perhaps they have a larger staff that can provide a wider range of services.
Merry Christmas and thank you for all the help you have given me over the years.
I was looking over grammer.y and found the following:
1) I guess your arrays have only two indexes at the most.
3) you start a function with the word "function"
7) I see rel_expr ::= expression so I guess rel_expr is higher than expression but I don't understand why you did it that way.
8) Your compiler is amazingly short and to the point. I should make mine that way.
You are very welcome. And a Merry Christmas and Happy New Year to you.
I believe I did leave out the actual relation testing (which is why the greater than/less than don't work, though the grammar has symbols for them).
The big limitation is the difficulty of handling cascading errors. There is some attempt to resynchronize the parser with various termination symbols (errors in handling an if look for the end - but if it is missing then the EOF gets found instead; errors in an expression look for the end of the expression). But the result makes the parsing code very short - and most programmers I've met only fix the first error and then try again, sometimes scanning through the later errors, going "that doesn't make sense", and skipping them until the first error is fixed.
The best error handling is usually done by LL parsers (a recursive descent implementation; I believe this is what the Clang compiler developers use, and it is used by the GNU C compiler, too). The advantage is that the entire context of the error is available at the time the error is seen, and it is easier to resync - but the disadvantage is a LOT more code. I liked the LR parsers mostly because they were one of the first grammar analysis techniques that I understood - they just seemed to make sense. Most of my uses had been in parsing data structures for input. I've even seen one used to translate network protocols, though it was a bit clumsy (specifying the tokens was a mess, as even those had a tendency to use a lot of flag bits/flag bytes before an identification could be made) - and error recovery wasn't very good (though very important; getting back in sync properly is mandatory).
I have found the code at http://www.textfiles.com/bitsavers/p...Emmy_Sep79.pdf very useful for understanding P-code. I may use something like it for the abstract machine for my compiler. I find the assembler pseudo-ops for procedure definition and entry/exit very interesting.
The big addition is the offset and frame position for accessing the outer environments. Finding the frame position requires iterating over the stack to locate the right frame, then applying the offset to access the designated value (the P and Q parts of the instruction). This is what tends to slow down p-code: the loop can take a while, as each iteration retrieves the next stack frame, and when the count is finished, the offset is added to the retrieved stack frame index to get the location of the value. Using lots of nested procedures slows the entire thing down, as each reference tends to repeat the cycle.
Another reference is Concurrent Pascal (https://en.wikipedia.org/wiki/Concurrent_Pascal). The book includes a compiler/interpreter and provides support for asynchronous operations, it also includes a tiny OS that was implemented using Concurrent Pascal for the implementation language.
Amazon had a book "Concurrent Pascal for the Minicomputer" for $1.83 + $3.99 shipping, which I bought. Its ideal intermediate language had 224 op codes, most of which were operating system calls. Thank you for bringing Concurrent Pascal to my attention. Did your frame register solve the problem of parsing the stack backwards until you got to the correct frame? Maybe an extra stack, called the frame stack, would reduce the need to look backwards through the main data stack.
No - it was single level, and the grammar didn't allow for nested functions either - just like the K&R C compiler didn't.
It was partly deliberate to keep the VM simple, and it allowed for two methods of function calls: one with a full stack frame, and one more usable for intrinsic functions. The full frame allowed retrieval of parameters and local variables via the frame pointer. To allow for nested functions would require looping over each frame, retrieving the frame pointer, until reaching the proper level, then using that value as the index + offset to the value. The big problem then becomes addressing an array on the stack... after copying the index and adding the offset, you have the base address, and then you still have to deal with the indexing within that array and adding it to the base address.
Adding an instruction to do the stack frame search is relatively simple (and much faster than using the other machine instructions...) but also slows down other possible features. Interrupts for example - either the stack gets a LOT more data making it susceptible to corruption (besides a stack frame/status it also has to hold any intermediate values of long running instructions) or interrupts would have to be disabled for the duration of an instruction (which keeps things simple). The only slow instructions I did define were those used for the formatted I/O (mostly used for testing).
I just thought it simpler to avoid the issue and not use nested functions.
Do you get much sleep - I noticed that you edited your last response at 2:31am and lots of times you are responding at 7am.
Sometimes insomnia. Other times I get sidetracked and forget the time and find out it is past midnight. We have a number of cats, and they occasionally wake me at inappropriate times as well.
Happy New Year! Some questions about your grammar. I guess you did not use parentheses in your IF and WHILE statements in order to distinguish them from your function calls? Do you implement both call by value and call by reference? Did you implement the IF statement the way you did to eliminate shift/reduce conflicts? I think you left out not equal (NE) in your rel_ops, as you only have five of them and I did not see NE. I enjoy reading your code.
Nope. just "if <relexpr> then..."
I left out a good bit there, as I was focused on getting the tree working. The only operators I did define were all on the lowest level: binary operators (GT/LT/EQ/GTEQ and LTEQ). Adding a unary NOT operator would just add to the "unary_op" of expressions. And doing it right would require the semantic functions to expand the type handling table.
The grammar structure for "and", "or", "xor" would be just like the arithmetic grammar that separates the multiply and divide from the add and subtract operators (it also avoids the shift/reduce conflicts). The reference is to the definition of a term (which has the TIMES/DIVIDE binary operators) and an expression. The lower precedence operators PLUS/MINUS are handled as an expression... and a term is then defined as a factor... which is a value or the result of a unary_op... which has a '(' expression ')' definition to give a recursive structure. The same thing happens with the logical/relational references. The recursion occurs with the unary_op so that it doesn't cause the shift/reduce problem.
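That precedence layering can be sketched in yacc-style notation. The rule names follow the thread (expression, term, factor, unary_op), but the exact productions are my guesses, not the actual grammar.y:

```
/* Sketch of the layering: PLUS/MINUS bind loosest at the
   expression level, TIMES/DIVIDE bind tighter at the term level,
   and the parenthesized recursion lives down in factor. */
expression : expression PLUS term
           | expression MINUS term
           | term
           ;
term       : term TIMES factor
           | term DIVIDE factor
           | factor
           ;
factor     : NUMBER
           | unary_op factor
           | '(' expression ')'   /* recursion without shift/reduce conflicts */
           ;
```

The logical operators would slot in the same way: "and"/"or" as another binary layer, NOT alongside the other unary_op cases, and the relational operators at the expression level.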
Granted, this should push the structure of "rel_expr" into "expression" and I hadn't done that. At the time I was focused on handling arithmetic expressions and had not expanded that into also handling boolean expressions. I think this would be done by adding the "and" and "or" to the definition of "term", and adding the NOT to the unary_op, and the relational operators to the "expression" definition.
I'll try to take a look at it again and see about merging some structures into the expression syntax and get rid of the "rel_expr" grammar.