It seems to me that Fido is nearing the end of its
useful life, and that the job of writing new nodelist compilers and editors that would use XML instead of ASCII is probably not worth undertaking.
On the other hand it would be a relatively simple matter to
write a utility that would convert the current nodelist to HTML or
XML in much the same way as present nodelist compilers produce
human readable lists now. This might satisfy Micael Bulow's needs. (Shouldn't there be an umlaut on the U?).
Have I missed or misunderstood anything important?
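For what it's worth, a minimal sketch of such a conversion utility might look like the following. The element and attribute names, and the file names, are invented for illustration only; they are not any proposed standard.

# Hypothetical sketch: convert St. Louis Format nodelist lines to XML.
# Element/attribute names and file names are invented for illustration.
import xml.etree.ElementTree as ET

def nodelist_to_xml(lines):
    root = ET.Element("nodelist")
    for line in lines:
        line = line.rstrip("\r\n")
        if not line or line.startswith(";"):          # skip comment lines
            continue
        fields = line.split(",")
        if len(fields) < 7:                           # malformed entry; skip it here
            continue
        keyword, number, name, location, sysop, phone, speed = fields[:7]
        node = ET.SubElement(root, "node", keyword=keyword, number=number)
        ET.SubElement(node, "name").text = name.replace("_", " ")
        ET.SubElement(node, "location").text = location.replace("_", " ")
        ET.SubElement(node, "sysop").text = sysop.replace("_", " ")
        ET.SubElement(node, "phone").text = phone
        ET.SubElement(node, "speed").text = speed
        for f in fields[7:]:                          # remaining fields are flags
            ET.SubElement(node, "flag").text = f
    return ET.ElementTree(root)

with open("nodelist.365") as fh:                      # hypothetical file name
    nodelist_to_xml(fh).write("nodelist.xml", encoding="utf-8", xml_declaration=True)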
Just to make things clear: it's not MY needs that need to be satisfied. I'm trying to point out the things that I believe are best for the Fidonet community in the future.
Maybe the people in Fidonet have to let go of some of their old
structures to attract new members and developers. (Of course there will always be utils to make things backward compatible.)
But I might be wrong.
You set a frame that we should use. Only, they tell
us that there is a problem when working towards XML
rather than starting from it.
To me it looks like going from London to Penzance
via Edinburgh.
Tag, Mijnheer!
You set a frame that we should use. Only, they tell
us that there is a problem when working towards XML
rather than starting from it.
To me it looks like going from London to Penzance
via Edinburgh.
OK, Jan. I made what seemed to me a sensible suggestion that
could lead to a workable compromise. If what I said is not useful,
then "let him who knows best speak".
I would find it useful if someone would explain to me why XML
is needed now after more than a decade running successfully without
it.
I do not understand why there would be a problem with a utility
that produces an XML list from the nodelist.
The nodelist cannot be significantly altered or superseded while we
are still using the term FidoNet anyway. To do so would just cut off everyone who depends on the nodelist as it is.
The list produced by the utility would be in XML already. Then
they are not working towards XML but starting from it.
It sounds to me as if people are making difficulties that do not
really exist, but perhaps my thinking is too lateral in this matter.
I would find it useful if someone would explain to me why XML
is needed now after more than a decade running successfully without
it.
SLF doesn't scale.
Most mail a decade ago was via PSTN, and the handful that weren't
were able to handle 'manual' arrangements easily. IP is fast
becoming the majority, and SLF just can't handle the data.
I do not understand why there would be a problem with a utility
that produces an XML list from the nodelist.
Because it would be useless - it would contain the same data as SLF.
The whole point of a new format is to allow addition of MORE
data, AND in a more structured format so as to allow future
expansion without kludges or ambiguity.
The nodelist cannot be significantly altered or superseded while we
are still using the term FidoNet anyway. To do so would just cut off
everyone who depends on the nodelist as it is.
Extracting the subset of information that is supported by SLF from
a superior format is trivially easy. Nobody has ever suggested
cutting off those that depend on SLF.
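To illustrate how such an extraction could work, here is a rough sketch of the reverse direction (XML back to SLF), reusing the same invented element names as the conversion sketch above; the defaults and layout are assumptions, not a spec.

# Sketch: extract the SLF subset of fields from a richer XML nodelist.
import xml.etree.ElementTree as ET

def xml_to_slf(path):
    tree = ET.parse(path)
    out = []
    for node in tree.getroot().iter("node"):
        fields = [
            node.get("keyword", ""),
            node.get("number", ""),
            node.findtext("name", "").replace(" ", "_"),
            node.findtext("location", "").replace(" ", "_"),
            node.findtext("sysop", "").replace(" ", "_"),
            node.findtext("phone", "-Unpublished-"),
            node.findtext("speed", "300"),
        ]
        fields += [f.text for f in node.findall("flag") if f.text]
        out.append(",".join(fields))
    return "\r\n".join(out) + "\r\n"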
The list produced by the utility would be in XML already. Then
they are not working towards XML but starting from it.
More or less. It's the only way it can work.
It sounds to me as if people are making difficulties that do not
really exist, but perhaps my thinking is too lateral in this matter.
There are a disturbing number of people worrying about the sky
falling down as well.
It may upgrade to using the data from ESLF. [snip]
ESLF will contain all data one ever would need; XML may extract whatever it needs.
If the developers will get serious and give priority to serving the
net.
The list produced by the utility would be in XML already.
Then they are not working towards XML but starting from it.
More or less. It's the only way it can work.
The problem seems to be that the XML developers do not see how
they could extract that data. As if string parsing would be a PITA
(even BASIC could do that in the early eighties...).
Which is by far less probable than losing a few legacy nodes in
the process.
ESLF will contain all data one ever would need; XML may extract
whatever it needs.
If you're thinking that we should use [E]SLF -> XML until everyone
can use XML, forget it. It won't work. There's little incentive
to use XML at all in that scenario.
If the developers will get serious and give priority to serving the
net.
Huh?
The list produced by the utility would be in XML already.
Then they are not working towards XML but starting from it.
More or less. It's the only way it can work.
The problem seems to be that the XML developers do not see how
they could extract that data. As if string parsing would be a PITA
(even BASIC could do that in the early eighties...).
Huh?
Which is by far less probable than losing a few legacy nodes in
the process.
You've been told repeatedly this won't happen. Pay attention.
If you're thinking that we should use [E]SLF -> XML until
everyone can use XML, forget it. It won't work. There's little
incentive to use XML at all in that scenario.
XML is good for local, not for global wide use.
If the developers will get serious and give priority to serving
the net.
Huh?
Don't you see what a "they must adopt" attitude really means?
The problem seems to be that the XML developers do not see
how they could extract that data. As if string parsing would be
a PITA (even BASIC could do that in the early eighties...).
Huh?
It was said it would be a PITA to extract data from (E)SLF for
building an XML base.
There is a small but very significant difference between "look
we've got some nice new software for you" and "you do not need to
upgrade at all".
A difference of 100 nodes, perhaps?
If you're thinking that we should use [E]SLF -> XML until everyone
can use XML, forget it. It won't work. There's little incentive
to use XML at all in that scenario.
XML is good for local, not for global wide use.
Huh?
It was said it would be a PITA to extract data from (E)SLF for
building an XML base.
You've been told repeatedly this won't happen. Pay attention.
There is a small but very significant difference between "look
we've got some nice new software for you" and "you do not need to
upgrade at all".
... because SLF is vague, with many conflicting 'standards' and
broken implementations.
Fixing them is obviously a good idea,...
... but David Drummond, etc. have been harping on about broken entries
for months and nothing significant has happened.
And then there are issues that cannot be fixed without breaking software.
It's far easier to start with a clean format, and convert the data
to the broken format, than start with a broken format and try to clean
it up automagically.
If you're thinking that we should use [E]SLF -> XML until everyone
can use XML, forget it. It won't work. There's little incentive
to use XML at all in that scenario.
XML is good for local, not for global wide use.
Please explain what you mean and give as much detail as possible.
It was said it would be a PITA to extract data from (E)SLF for
building an XML base.
Who has said that? What I've said is:
A: There is no (E)SLF
B: It would be a waste of time to continue to kludge the nodelist.
It is best to start fresh and new and convert data back for those that must continue to use SLF
You've been told repeatedly this won't happen. Pay attention.
There is a small but very significant difference between "look
we've got some nice new software for you" and "you do not need to
upgrade at all".
This is what is known as FUD. No one has to upgrade software. See
point B above.
You seem to have a complete view of (1) what is vague in SLF, (2)
what standards are conflicting between them and how they conflict and
(3) which implementations of what are broken.
I must confess that by now I'm off the track. Would it be
possible for you to give us a summary?
So things have indeed happened, but they are considered insignificant.
Would you, or David, provide a list of them?
And then there are issues that cannot be fixed without breaking
software.
Tell me which, how and where, please.
It's easier, but we do not like leaving a lot of debris, so let's
look at the difficult way, shall we?
any other list than the NodeList can only be global if all *C's
have the capability and the ability to build their segments in
the 'new' format.
Without those conditions, such a list will fail to be acceptable.
[ 24 Dec 02 23:05, Jan Vermeulen wrote to Dale Ross ]
any other list than the NodeList can only be global if all *C's
have the capability and the ability to build their segments in
the 'new' format.
Without those conditions, such a list will fail to be acceptable.
Hence a top-down approach. Those at the top will need to be the
first to use the new software. Segments submitted as SLF remain
subject to SLF's limitations, but those submitted as XML won't. As
the XML software spreads among the *Cs, more of the XML nodelist
will be XML native. SLF nodes will not notice any difference,
except perhaps that simple errors in listings will be
auto-corrected by the translation process from SLF -> XML -> SLF.
Of course you will need consensus at that level...
Of course you will need consensus at that level...
Not necessarily. XML capable Nets, Regions or Zone can issue their
own XML Net/Region/Zonelists, without the cooperation of other Nets/Regions/Zones.
However, if XML gets popular, luddite *Cs will simply find
themselves out of a job as XML capable systems will 'route around
the damage' by organising their own distribution channels.
Not necessarily. XML capable Nets, Regions or Zone can issue
their own XML Net/Region/Zonelists, without the cooperation of
other Nets/Regions/Zones.
You will still need to send in your standard nodelist segment to
the RC or ZC in order to get listed.
themselves out of a job as XML capable systems will 'route around
the damage' by organising their own distribution channels.
Why don't you start your xmlnet right away, if that is what you
want?
Sorry, Bill, I did not mean you, I meant them;
I'm in agreement with you.
Not necessarily. XML capable Nets, Regions or Zone can issue
their own XML Net/Region/Zonelists, without the cooperation of
other Nets/Regions/Zones.
You will still need to send in your standard nodelist segment to
the RC or ZC in order to get listed.
Yes, so?
Again, and again, and again: backward compatibility is a given, it
will not work any other way. We know this. Get over it.
themselves out of a job as XML capable systems will 'route around
the damage' by organising their own distribution channels.
Why don't you start your xmlnet right away, if that is what you
want?
Alternate distribution of XML nodelist segments doesn't require a
new network.
And then you get back a standard nodelist - which you will need to convert into XML
in order to have a complete nodelist as required by policy.
Again and once more: you will need nodelist-to-xml software, you
know, that software they said is a PITA to code.
[ 29 Dec 02 17:01, Jan Vermeulen wrote to Scott Little ]
And then you get back a standard nodelist - which you will need
to convert into XML
True, but probably not in the way you think, as long as the
software is correctly written. I can take that SLF nodelist, and incorporate it into the XML nodelist, filling in the missing
pieces. Only those nodes that don't have a native XML listing
will need to be converted.
This is where the alternate-distribution comes in. If some *Cs
don't distribute XML segments, XML systems will find alternate
means by which to compile a more complete XML native nodelist,
with fewer converted parts.
in order to have a complete nodelist as required by policy.
Eh, what? Which part of Policy 4.07 requires every node to have a
full copy of the nodelist (as issued by the IC)?
Again and once more: you will need nodelist-to-xml software, you
know, that software they said is a PITA to code.
Users of XML or any alternate nodelists will have to accept that
there may be inaccuracies in the converted portions, such as a
system's name with a dot in it ending up in the domain name field
as well. Such nodes can be flagged as suspicious during the
conversion, and treated with caution by XML software.
Bye <=-
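A rough sketch of the merge step described above: native XML listings are kept, only the nodes that lack one are converted from SLF, and dubious conversions are marked. The dictionary shapes, the convert_slf_entry() helper and the "suspect" attribute are my assumptions, not anything agreed on in this thread.

def merge_slf_into_xml(xml_nodes, slf_nodes):
    """Both arguments are dicts keyed by address, e.g. "2:280/100"."""
    merged = dict(xml_nodes)                      # native XML listings take priority
    for address, entry in slf_nodes.items():
        if address in merged:
            continue                              # already have richer, native data
        node = convert_slf_entry(entry)           # hypothetical SLF-to-XML helper
        if "." in entry["system_name"]:           # a dot may really be a hostname
            node.set("suspect", "name-vs-domain")
        merged[address] = node
    return merged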
[ 29 Dec 02 17:01, Jan Vermeulen wrote to Scott Little ]
And then you get back a standard nodelist - which you will need to
convert into XML
True,...
... but probably not in the way you think, as long as the
software is correctly written. I can take that SLF nodelist, and incorporate it into the XML nodelist, filling in the missing
pieces. Only those nodes that don't have a native XML listing will
need to be converted.
This is where the alternate-distribution comes in. If some *Cs
don't distribute XML segments, XML systems will find alternate
means by which to compile a more complete XML native nodelist, with
fewer converted parts.
in order to have a complete nodelist as required by policy.
Eh, what? Which part of Policy 4.07 requires every node to have a
full copy of the nodelist (as issued by the IC)?
And then you get back a standard nodelist - which you will need to
convert into XML in order to have a complete nodelist as required
by policy.
Again and once more: you will need nodelist-to-xml software, you
know, that software they said is a PITA to code.
Users of XML or any alternate nodelists will have to accept that
there may be inaccuracies in the converted portions, such as a
system's name with a dot in it ending up in the domain name field
as well. Such nodes can be flagged as suspicious during the
conversion, and treated with caution by XML software.
So far so good. You admit that a standard nodelist is needed in
order to fill in the holes. Instead of hunting for holes, just take
the whole nodelist and convert it to XML. Less PITA.
You might want to read P4 a bit beyond 2.1.11 and 2.2 in order to understand what I really had in mind.
Well, this is the first time that you admit there will be flaws;
we do make progress, do we not?
You are. You forget those that are already there and are
perfectly satisfied with what is on offer right now. Imposing on them
new things they do not need may chase them from the net.
Which, as an RC, I do not want to happen.
I would find it useful if someone would explain to me why XML
is needed now after more than a decade running successfully without
it.
I do not understand why there would be a problem with a utility
that produces an XML list from the nodelist.
XML is good for local, not for global wide use.
XML is good for local, not for global wide use.
XML is good for local, not for global wide use.
Wrong,
XML was made for global exchange of data.
You are. You forget those that are already there and are
perfectly satisfied with what is on offer right now. Imposing on them
new things they do not need may chase them from the net.
You got me wrong.
Of course we have to see to it that everything we change also can
be provided in a backward compatible format for the sysops.
But I think attracting new developers and members is an important
issue today.
ONE of the solutions could be some changes at
toplevels to be able to add techniques widely used today.
After all, the net is getting smaller every day. The main task is to
turn that trend around.
Tue 2002-12-24 18:45, Jan Vermeulen (2:280/100) wrote to Scott
Little:
XML is good for local, not for global wide use.
I don't know where you got that idea.
"XML was designed by the World Wide Web Consortium (W3C) to
streamline data exchange across the Internet.
XML is ... &c
XML is good for local, not for global wide use.
I don't know where you got that idea.
I thought that in the context this would be a matter of course, but
it appears I was mistaken.
I should have said:
"As_far_as_FidoNet_is_concerned, XML is good for local, not for global
wide use."
"XML was designed by the World Wide Web Consortium (W3C) to
streamline data exchange across the Internet.
W3C is not all of the internet. XML may be fine as it is but there
is no single cure for all pains.
Of course we have to see to it that everything we change also can
be provided in a backward compatible format for the sysops.
Ok, the intention is there. But how sure can you be that not
even one byte will get lost or damaged in the operation?
ONE of the solutions could be some changes at
toplevels to be able to add techniques widely used today.
Explain 'toplevels'. Who? What? Why?
After all, the net is getting smaller every day. The main task is to
turn that trend around.
Do you really think that a nodelist is the place to start?
XML is good for local, not for global wide use.
I don't know where you got that idea.
I thought that in the context this would be a matter of course, but
it appears I was mistaken.
I understood the context, but...
I should have said:
"As_far_as_FidoNet_is_concerned, XML is good for local, not for global
wide use."
You haven't said why.
"XML was designed by the World Wide Web Consortium (W3C) to
streamline data exchange across the Internet.
W3C is not all of the internet. XML may be fine as it is but there
is no single cure for all pains.
I don't think anyone is suggesting it is "the final solution". But
it is probably the best option for a "next step".
But I
am convinced that IF we are going to change the format of the
nodelist, we really should consider XML in depth before staying with the stone-age format.
I do not understand why there would be a problem with a utility
that produces an XML list from the nodelist.
Of course we have to see to it that everything we change also can
be provided in a backward compatible format for the sysops.
Ok, the intention is there. But how sure can you be that not
even one byte will get lost or damaged in the operation?
XML is very strict. Either you have a valid XML file, or you don't.
The risk of an invalid file is much bigger in the old format (which
we have seen too often).
ONE of the solutions could be some changes at
toplevels to be able to add techniques widely used today.
Explain 'toplevels'. Who? What? Why?
I don't know. We don't have a solution yet.
After all, the net is getting smaller every day. The main task is to
turn that trend around.
Do you really think that a nodelist is the place to start?
No. Actually I don't quite get why everyone is starting there.
The whole point of a new format is to allow addition of MORE
data, AND in a more structured format so as to allow future
expansion without kludges or ambiguity.
ESLF can do all that, on a need to have basis.
ESLF will contain all data one ever would need; XML may extract whatever it needs.
The problem seems to be that the XML developers do not see how
they could extract that data. As if string parsing would be a PITA
(even BASIC could do that in the early eighties...).
XML is very strict. Either you have a valid XML file, or you don't.
How do you check validity?
Invalid data in one line does not necessarily mean that the
entire file is invalid.
I want to know who or what are the top levels you meant and why
you considered to start from them (which or whatever they are).
Do you really think that a nodelist is the place to start?
No. Actually I don't quite get why everyone is starting there.
Because the loudest voices are about problems with their nodes'
entries; I am still waiting for a list telling me the who, what and why
of those problems.
There's nothing wrong with the SLF Nodelist. The problem is the
implementing of IP. That was not done right to begin with. :(
With PSTN, each mailer is expected to be able to transfer mail at
least at FTS-1 (x-modem?).
With IP, there is no minimum required transfer method. This means that
each protocol (binkp, telnet and such) has to have a flag in the
Nodelist.
To "fix" this, a means needs to be made for IP mailers to determine
the protocol to use during the/a connection. IOW, my IP mailer
contacts your IP mailer and figures out what protocol to use. A
minimum protocol would also be needed which all IP mailers use.
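One naive way an IP mailer could "figure out what protocol to use", sketched below, is simply to probe the well-known ports in order of preference and fall back to the agreed minimum. This is purely illustrative; no such negotiation scheme is being defined here, and the port list is an assumption.

import socket

PREFERRED = [("binkp", 24554), ("telnet", 23)]    # try binkp first, minimum last

def pick_protocol(host, timeout=5.0):
    for name, port in PREFERRED:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return name                       # first port that answers wins
        except OSError:
            continue
    return None                                   # no IP mailer reachable at all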
There's nothing wrong with the SLF Nodelist. The problem is the
implementing of IP. That was not done right to begin with. :(
It can be fixed. :-)
With PSTN, each mailer is expected to be able to transfer mail at
least at FTS-1 (x-modem?).
Xmodem with TeLink extensions.
With IP, there is no minimum required transfer method. This means that
each protocol (binkp, telnet and such) has to have a flag in the
Nodelist.
Are there any other common IP protocols other than BinkP & Telnet (ie. FTS-1 over Vmodem or equivalent)?
To "fix" this, a means needs to be made for IP mailers to determine
the protocol to use during the/a connection. IOW, my IP mailer
contacts your IP mailer and figures out what protocol to use. A
minimum protocol would also be needed which all IP mailers use.
You may as well make BinkP the minimum protocol.
... but when doing it the other way around the result would lack all
the "extended" features that the new format otherwise could provide.
If I've understood the proposals correctly the ESLF will add some keyword/value lines before or after each nodelist line. That means
we'll be doing both line-based CSV parsing and keyword/value
parsing. (Hooray! ;-) A pure keyword/value format would be cleaner
and simpler.
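Only to illustrate the mixed parsing this would imply, here is a small sketch. The ";E Key: Value" syntax is invented for the example; no ESLF syntax has actually been agreed on.

def parse_eslf(lines):
    nodes, pending = [], {}
    for line in lines:
        line = line.rstrip("\r\n")
        if line.startswith(";E "):                    # keyword/value extension line
            key, _, value = line[3:].partition(":")
            pending[key.strip()] = value.strip()
        elif line and not line.startswith(";"):       # ordinary CSV nodelist line
            nodes.append({"fields": line.split(","), "extra": pending})
            pending = {}                              # start collecting for the next entry
    return nodes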
The SLF/ESLF format is difficult to use in other forms than the traditional text file.
With some work, the nodelist can be imported into e.g. a database table, but handling the diffs is very difficult.
The fact that you don't need such things right now
doesn't mean that no one else needs it either.
ESLF will contain all data one ever would need; XML may extract
whatever it needs.
Yes, it's certainly possible to include much more data in the
nodelist using different tweaks, but it will not be as pure and
simple as a real keyword/value format. Since every piece of
software that wants to use the new data needs to be rewritten we
could as well take a bigger step and fix other issues as well.
BTW, I hope ESLF will use UTF-8 or something similar...?
The problem seems to be that the XML developers do not see how
they could extract that data. As if string parsing would be a PITA
(even BASIC could do that in the early eighties...).
I've written lots of text parsers in many different languages but
that doesn't mean that I always enjoyed it.
In some languages text parsing is a lot easier than in others...
XML is very strict. Either you have a valid XML file, or you don't.
How do you check validity?
Your XML parser does. You can choose to just check the validity or
you can confirm the structure against a pre-defined schema.
Invalid data in one line does not necessarily mean that the
entire file is invalid.
In XML, it does.
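A quick way to see that behaviour with a stock parser: one stray "&" or an unclosed tag anywhere makes the whole document unparseable. (Note this checks well-formedness; validating against a schema needs a validating parser. The file name is hypothetical.)

import xml.etree.ElementTree as ET

try:
    ET.parse("nodelist.xml")                      # hypothetical file name
    print("well-formed")
except ET.ParseError as err:
    line, column = err.position                   # points at the first offence
    print("rejected at line %d, column %d: %s" % (line, column, err))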
I want to know who or what are the top levels you meant and why
you considered to start from them (which or whatever they are).
When it comes to the nodelist, it would be quite enough to let the ZC
keep the base XML file maintained. Perhaps a new util for this
would have to be developed. But that's all.
To start anywhere else (ie at node level) would only create a lot
of confusion and problems.
With some work, the nodelist can be imported into e.g. a database
table, but handling the diffs is very difficult.
Deleting and adding records was already possible with DBASE1 at
the Jet Propulsion Laboratory, so I do not see what would be the
problem, unless you spread your fields all over the place.
Sure. Bytes 0x20 thru 0x7F plus EOF. We'll tackle your name later
;-)
If you want to write code for the net, you first should look at
what the net needs and will be able to use; your joy should come from a
job well done, not of the coding itself. That is very much secondary.
... but when doing it the other way around the result would lack all
the "extended" features that the new format otherwise could provide.
Really? Would it be so difficult to take a nodelist, transform it
into XML and then add the extended features?
Where would the extended features come from when starting to build
an XML list to begin with?
Just my humble opinion.
There's nothing wrong with the SLF Nodelist. The problem is the implementing of IP. That was not done right to begin with. :(
That's bad. Lose all new entries when you lose one. We can't
have that.
With PSTN, each mailer is expected to be able to transfer mail at
least at FTS-1 (x-modem?).
Xmodem with TeLink extensions.
I guess. I've always heard FTS-1 and xmodem. Point is, this is the
minimum required for PSTN. Each PSTN mailer must support at least
this.
With IP, there is no minimum required transfer method. This means that
each protocol (binkp, telnet and such) has to have a flag in the
Nodelist.
Are there any other common IP protocols other than BinkP & Telnet
(ie. FTS-1 over Vmodem or equivalent)?
I have no idea. I wouldn't call binkp or telnet common in the respect
of "to every IP mailer", but they seem to be the most common used for
IP transfer of Fidonet mail.
To "fix" this, a means needs to be made for IP mailers to determine
the protocol to use during the/a connection. IOW, my IP mailer
contacts your IP mailer and figures out what protocol to use. A
minimum protocol would also be needed which all IP mailers use.
You may as well make BinkP the minimum protocol.
It really doesn't matter to me what protocol is the minimum.
The point
is that Fidonet needs a minimum required IP protocol for connecting
that each IP mailer can use. Other protocols can be implemented in the mailers as well, but each would at least be able to do the minimum.
The next step would be to figure out how to negotiate the transfer
protocol upon connection.
Just my humble opinion.
There's nothing wrong with the SLF Nodelist. The problem is the implementing of IP. That was not done right to begin with. :(
What you say is that it is not the nodelist that is broken, but
the implementation of internet protocols and that some repair is
needed. The nodelist then could be used as it is now.
Is that your opinion?
Are there any other common IP protocols other than BinkP &
Telnet (ie. FTS-1 over Vmodem or equivalent)?
I have no idea. I wouldn't call binkp or telnet common in the respect
of "to every IP mailer", but they seem to be the most commonly used for
IP transfer of Fidonet mail.
I just meant protocols used for FidoNet mail transfer.
You may as well make BinkP the minimum protocol.
It really doesn't matter to me what protocol is the minimum.
Well, I mention BinkP because it's by far the most common. Plus you
can actually send mail with it. ;-)
The point
is that Fidonet needs a minimum required IP protocol for connecting
that each IP mailer can use. Other protocols can be implemented in the mailers as well, but each would at least be able to do the minimum.
Yes.
The next step would be to figure out how to negotiate the transfer
protocol upon connection.
This can be done manually until such time that the software can
negotiate automatically.
The problem is how do you know what data to add, update or delete?
The diffs don't say "add 2:204/255..."; they say "copy (ignore) 17
lines", "delete 3 lines", "add the following 4 lines" and so on, so
you still need to have the original nodelist to be able to resolve
the diff into something useful. :-(
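For reference, a minimal sketch of applying a diff of the copy/delete/add kind described above, assuming the usual Ann/Cnn/Dnn command letters (Ann = take the nn lines that follow from the diff, Cnn = copy nn lines from the old list, Dnn = skip nn lines of it). The leading verification line and any CRC checking are ignored here for brevity.

def apply_nodediff(old_lines, diff_lines):
    new, pos, i = [], 0, 1                 # diff line 0 just repeats the old first line
    while i < len(diff_lines):
        cmd, count = diff_lines[i][0].upper(), int(diff_lines[i][1:])
        i += 1
        if cmd == "A":                     # add: take the next `count` lines from the diff
            new.extend(diff_lines[i:i + count]); i += count
        elif cmd == "C":                   # copy: keep `count` lines of the old nodelist
            new.extend(old_lines[pos:pos + count]); pos += count
        elif cmd == "D":                   # delete: drop `count` lines of the old nodelist
            pos += count
    return new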
Sure. Bytes 0x20 thru 0x7F plus EOF. We'll tackle your name later
;-)
Is that a promise? ;-)
If you want to write code for the net, you first should look at
what the net needs and will be able to use; your joy should come from a
job well done, not of the coding itself. That is very much secondary.
Sure, but for me a "job well done" doesn't only mean that it works.
It should be simple, logical, elegant and neat too...
... and bending, twisting and inventing new kludges doesn't fit
very well into that.
... but when doing it the other way around the result would lack all
the "extended" features that the new format otherwise could provide.
Really? Would it be so difficult to take a nodelist, transform it
into XML and then add the extended features?
Where would the extended features come from in this case?
Where would the extended features come from when starting to build
an XML list to begin with?
From the one submitting the data, provided it's submitted using
some kind of "extended features aware" system/format.
That's bad. Lose all new entries when you lose one. We can't
have that.
So use your segment processor to check it before wandering off. If
you don't, and it's broken, the upstream segment processor will
just use the last good one.
Most important, it's a widely used standard. All
developers on all platforms can handle the incoming
data.
Let's hope so. There are time limits too, you know.
There has been much correspondence, Micael. I now understand why
you think this is desirable. Before saying any more, I have to be sure that doubling the size of the nodelist without doubling the membership
is permissible, and I have doubts about that. It will become clear in
a few days, so I must counsel patience till then.
There are also problems in introducing 'new' elements into the nodelist, because each element has to be examined by the government of each country to which the nodelist is distributed to ensure that it
still complies with each individual data protection statute. At some
point the sum of all the elements may become an invasion of
individual privacy, too.
We are bound by the concept of "annoying behaviour" not to do
anything which could cause the sysop of any node in the net to be
arrested or imprisoned for something outside his control.
Invalid data in one line does not necessarily mean that the
entire file is invalid.
In XML, it does.
That's bad. Lose all new entries when you lose one. We can't
have that.
If I read you well, the ZC needs to make the entire XML
nodelist. That means that he gets [E]SLF segments, puts them
together to make a GONL and then, only then, he can make the XML
list.
Would XML by any chance abort at the first error found, so you
could be in for a nice surprise when restarting after having
corrected the error?
you think this is desirable. Before saying any more, I have to be
sure that doubling the size of the nodelist without doubling the
We are bound by the concept of "annoying behaviour" not to do
anything which could cause the sysop of any node in the net to be
arrested or imprisoned for something outside his control.
In XML, it does.
That's bad. Lose all new entries when you lose one. We can't
have that.
You don't lose anything, but the parser won't accept it.
Would XML by any chance abort at the first error found, so you
could be in for a nice surprise when restarting after having
corrected the error?
An XML parser wouldn't accept the XML document at all if the file
is not valid. It would exit with an error message, telling you what
line is invalid.
Let's hope so. There are time limits too, you know.
Time limits?
You don't lose anything, but the parser won't accept it.
So it will not go to the next level; that is tantamount to
losing it this week and possibly later weeks.
That was not my question. Let me rephrase it: if the document
would contain two errors, would the parser exit when finding the
first error or would it continue to the end of the document and
produce a log of all errors found?
Sat 2003-01-04 21:16, Frank Vest (1:124/6308.1) wrote to Jan
Vermeulen:
It can be fixed. :-)
Xmodem with TeLink extensions.
Are there any other common IP protocols other than BinkP & Telnet
(ie. FTS-1 over Vmodem or equivalent)?
protocol count comment
You may as well make BinkP the minimum protocol.
Bye <=-
Time limits?
Would you read last week's paper?
That's bad. Lose all new entries when you lose one. We can't
have that.
You don't lose anything, but the parser won't accept it.
So it will not go to the next level; that is tantamount to losing
it this week and possibly later weeks. That's why we have the error flag in
the current nodelist: flag it but do not break it...
You don't lose anything, but the parser won't accept it.
So it will not go to the next level; that is tantamount to
losing it this week and possibly later weeks.
As I told you, you won't lose the data.
Anyway, if this becomes a problem (I don't see it) it's easy to get
around with some error handling.
That was not my question. Let me rephrase it: if the document
would contain two errors, would the parser exit when finding the
first error or would it continue to the end of the document and
produce a log of all errors found?
I've used the DOM and it exits on the first error, telling me that
the stream is not well-formed, along with the first erroneous row.
I haven't had the opportunity to try the SAX parser yet, so I don't
know about that one.
However, as said above, it won't be a problem. It can be worked
around easily.
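One possible shape of that error handling, sketched below: parse each incoming XML segment on its own, and keep the last good copy when a new one fails to parse, so one broken segment never invalidates the whole list. The function and file names are invented for the example.

import shutil
import xml.etree.ElementTree as ET

def accept_segment(incoming, current):
    try:
        ET.parse(incoming)                        # reject the one file, not the net
    except ET.ParseError as err:
        print("segment %s rejected (%s); keeping last good copy" % (incoming, err))
        return current
    shutil.copyfile(incoming, current)            # promote the newly arrived segment
    return current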
Nor will it get to the top for integration in that week's and
who knows how many next weeks' nodelists. And that is a loss, not
for me, but for the net, because it spells loss of connectivity for
that or those unlisted or wrongly listed nodes AND all other correct
changes that were in that same list. Until I find out when coming
home from a holiday or the hospital.
Capisce now?
Then do so and have it tested.
If it's broken it's broken and should be fixed, not just commented
out. Until good data arrives, the last known good data should be
used.
In the future we might not even need to send entire segments as
updates, but only the single node that needs to be
changed, so no one else will be affected.
Nor will it get to the top for integration in that week's and
who knows how many next weeks' nodelists. And that is a loss, not
for me, but for the net, because it spells loss of connectivity for
that or those unlisted or wrongly listed nodes AND all other correct
changes that were in that same list. Until I find out when coming
home from a holiday or the hospital.
Capisce now?
No. You have the exact same problem today.
You are expected to work on complete records that have been
arranged in a given order -- what are you planning to do with the
data when you can't locate your records anymore?
Bye <=-
If any of the above is a problem, you always have the
option not to subscribe to the nodelist in XML format.
Everyone who for any reason wishes to keep the old
format should have the option to do so.
Maybe you and Jan can redirect those kinds of
arguments to the *Cs...
they are only relevant to them,
and only if/when they are to decide whether XML is to
become the 'official' nodelist format.
This is not the place to argue Policy and suchlike.
As with the rise of IP connectivity, the old-schoolers
can bitch and moan all they like but those that want
to progress will do so regardless, and as long as it
doesn't negatively interfere with other nodes, nobody
can do a damn thing about it.
How does that differ from the current list?
Untrue. Policy 4 affects every sysop in FidoNet.
Don't just bitch about it or pretend it doesn't exist.
This is the NET_DEV echo. If Net development is pointing in a forbidden direction, surely this *IS* the place to say so.
That's the whole point. Anything that impacts the cost of
distribution of the nodelist does interfere negatively with all other nodes. Doubling the size and therefore the cost of distribution of the nodelist is a prime instance of 'annoying behaviour'. That is why
there must be an alternative.
How does that differ from the current list?
Even you must be able to see that. Are you a ninny?
You may as well make BinkP the minimum protocol.
yeah, it seems most popular; what sort of software licence is involved?
This is the NET_DEV echo. If Net development is pointing in a
forbidden direction, surely this *IS* the place to say so.
You do realise that development generally means leaving people
behind.
We aren't doubling the nodelist. We're making a new nodelist, if
                                              ^^^^^^^^^^^^^^^^
you want to use it, do so, if not, get scrapped
How does that differ from the current list?
Even you must be able to see that. Are you a ninny?
Bye <=-
How does that differ from the current list?
Even you must be able to see that. Are you a ninny?
you haven't made it that extremely clear... so far by reading your examples it doesn't appear to do anything SLF can't
AFAICT there's no rule against development, but making things worse
for people isn't a good idea.
We aren't doubling the nodelist. We're making a new nodelist, if
                                              ^^^^^^^^^^^^^^^^
you want to use it, do so, if not, get scrapped
huh?
How does that differ from the current list?
Even you must be able to see that. Are you a ninny?
you haven't made it that extremely clear... so far by reading your
examples it doesn't appear to do anything SLF can't
Windows, or Linux computer. When data is written in XML, it
can be transferred between applications, regardless of the
factors that would typically mandate transforming the data
into a useable format."
Windows, or Linux computer. When data is written in XML, it
can be transferred between applications, regardless of the
factors that would typically mandate transforming the data
into a useable format."
And how does that make it different from comma-delimited ASCII?
4. It's easier to post to an echo. ;-)
Lying is beneath even you, Scott. I said no such thing.
Can't you tell the difference between a question and a statement? Clue: the question ends in a question mark.
Windows, or Linux computer. When data is written in XML, it
can be transferred between applications, regardless of the
factors that would typically mandate transforming the data
into a useable format."
AFAICT there's no rule against development, but making things worse
for people isn't a good idea.
There's a difference between leaving people and their vintage
software behind, and making things worse for them.
He told me earlier to "get scrapped" if I didn't like
it.. just returning the favour.
Ooh, name calling.
How does that differ from the current list?
Even you must be able to see that. Are you a ninny?
you haven't made it that extremely clear... so far by reading your
examples it doesn't appear to do anything SLF can't
Then you have not thought about it. SLF has many limiting
factors. FLAG field size is one of them.
How does that differ from the current list?
Even you must be able to see that. Are you a ninny?
you haven't made it that extremely clear... so far by reading
your examples it doesn't appear to do anything SLF can't
Eh, who are you talking to?
Bye <=-
You may as well make BinkP the minimum protocol.
Here it is:[snip]
Bye <=-
You may as well make BinkP the minimum protocol.
yeah, it seems most popular; what sort of software licence is
involved?
Here it is:[snip]
Thanks,
Is there an open/free source implementation of the protocol?
You may as well make BinkP the minimum protocol.
Is there an open/free source implementation of the protocol?
Sorry, I misread.. you said to get the policy scrapped
if I didn't like it...