Mail Archives: geda-user/2015/12/23/00:15:07
On Wed, 23 Dec 2015, Evan Foss (evanfoss AT gmail DOT com) [via geda-user AT delorie DOT com] wrote:
> On Tue, Dec 22, 2015 at 11:47 PM, John Doty <jpd AT noqsi DOT com> wrote:
>>
>> On Dec 22, 2015, at 4:22 PM, Peter Stuge (peter AT stuge DOT se) [via geda-user AT delorie DOT com] <geda-user AT delorie DOT com> wrote:
>>
>>> John Doty wrote:
>>>>> Bonus points for PCB using the core-library too, so it can "give up"
>>>>> its one preferred on-disk netlist format, and read any useful ones we
>>>>> care to implement a reader for in the core EDA library.
>>>>
>>>> I'm quite skeptical of a core library. An agreed-upon external data
>>>> representation is handy, but tool writers will want their own
>>>> internal representations in their own languages for their own problems.
>>>
>>> The purpose of a core library is to take care of the lower-level
>>> things required to deal with the external data representation.
>>
>> I'd prefer to make the external representation transparent.
>>
>>>
>>> It is key for a core library to make all available data easily usable
>>> for tool writers, not to enforce a particular internal representation.
>>
>> But of course, it will use a particular representation. A core library for C++ isn't going to be useful to an AWK programmer.
>>
>> One thing that's nice about our .sch format is that it is easy to read and write from pretty much any language. There's no need for any extra layer.
>
> Yes but with an extra layer more people can play with files made in
> PCB. Why force them to write and maintain a whole second library to
> handle our file format? Look at the number of utilities that people
> have written to create footprints and things via their own file
> parsing code. That is a lot of duplicated effort that will be broken
> when we revise the format. I know you don't use it but I do and I can
> tell you there are things we need the format to represent that it just
> can not right now.
I think this is a bit more complicated in real life and the truth is
somewhere in between.
It's great to have an official library that supports the file format and
tracks (or even defines) the changes of the file format. It's good if it's
widely available with very low burden.
Now consider that not everyone works in the same language that you, or the
library, do. For example, someone may want to use awk to generate footprints
(as I do). If you really aimed for the lowest possible burden, there is a
good chance I could use your library (e.g. it is possible to bind C
functions to gawk or libmawk, so if you have written your library in
plain C, I could use it; if the library is in Python, the burden is
somewhat higher; and if it's written in Clipper, well...).
But sometimes it is just not worth it. For example, if you need to write
software that emits an XML file with two nodes, and you are sure it won't
get more complicated over time, you probably won't pull in an XML library,
build a full DOM, and call the ready-made function to serialize the DOM
to XML. It's just cheaper to do a single printf(), and it works equally
well. This does not mean having a library for XML is always bad, or that
writing XML by hand is always good.
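To make the trade-off concrete, here is a minimal sketch of the printf() approach: a complete, well-formed XML document with two nodes, produced without any XML library. The element and attribute names (`netlist`, `net`, `name`) are purely hypothetical, not any actual gEDA/PCB format.

```shell
# Emit a tiny fixed-shape XML file with plain printf: no DOM, no library.
# Fine as long as the structure stays this simple and the values need no escaping.
printf '<?xml version="1.0"?>\n<netlist>\n  <net name="%s"/>\n  <net name="%s"/>\n</netlist>\n' GND VCC
```

The moment the data can contain characters that need escaping, or the structure grows, this one-liner becomes the wrong tool and a real XML library earns its burden.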
The question is where the line between "use the library" and "it's
much simpler with a printf()" sits. And this depends on a lot of factors:
the person doing the project, the actual burden of your library, the
accessibility of your library, documentation, etc.
A lot of this is totally subjective. Even the burden part. For example,
I have scripts parsing .sch files. It's painful, and I am fully aware that
I risk a rewrite if the file format changes. I am also fully aware that
libgeda exists and was invented for exactly this kind of thing. However,
the way libgeda is structured, its API and dependencies, its coupling with
Scheme... for me, all of these make it a worse alternative than just
rewriting the part I need. But how one weighs these properties of the
library against the pain of a hand-crafted parser is totally subjective.
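As an illustration of the kind of hand-crafted parsing described above, here is a sketch of pulling symbol names out of a .sch file with awk. It assumes the gEDA format convention that component lines start with "C" and the symbol basename is the seventh whitespace-separated field; the sample file content is invented for the example.

```shell
# Create a tiny, made-up .sch fragment to parse (two component placements).
cat > sample.sch <<'EOF'
v 20110115 2
C 18600 48200 1 0 0 resistor-1.sym
C 21000 48200 1 0 0 capacitor-1.sym
EOF
# Component lines begin with "C"; print field 7, the symbol basename.
awk '/^C /{print $7}' sample.sch
```

A dozen characters of awk versus linking a library: exactly the trade-off in question, and exactly the kind of script that breaks when the format changes.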
So I have to agree with John about one thing: a transparent file format is
good. (Even if I personally find the .sch format very unfriendly.) I don't
think the file format cannot or should not change, even if changes break a
lot of hand-written parsers. But I also don't think there is an ultimate
solution to this. And I especially don't think it's possible to write a
fits-all library that everyone then uses happily ever after, thus
solving the problem.
I think we should just accept that different things work for different
people, and try to make things versatile so that all approaches can work.
Transparent file format _and_ good libraries handling them. Relatively
stable file formats _and_ making changes to them when it becomes
necessary.