cp -u <url> not working
Project: JNode Shell
Component: Code
Category: feature request
Priority: normal
Assigned: Unassigned
Status: postponed
Description
The cp command should accept URLs as sources for the copy operation via the -u option. This doesn't work. More than one URL should be supported.
#1
#2
We'd need to think up a command syntax that is going to work in all cases that we want to support. I'll take a look at this.
If this doesn't work, a better bet (IMO) would be to create a separate command (say "ucp") that works with URLs instead of file names. Another option might be a virtual file system based on URL resolution; e.g. "/url/http/hostname:port/some/path" or "/url/file/jnode/home/..."
#3
I committed a version of a basic wget command. The syntax is:
wget url
It's really simple for the moment, but if it fits our needs, I can improve it.
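For illustration, a minimal standalone sketch of the idea (this is not the committed JNode code; the class name and the buffering details are made up):

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public class Wget {
    public static void main(String[] args) throws Exception {
        URL url = new URL(args[0]);
        // Derive a local file name from the last path segment.
        String path = url.getPath();
        String name = path.substring(path.lastIndexOf('/') + 1);
        if (name.length() == 0) {
            name = "index.html";
        }
        InputStream in = url.openStream();
        OutputStream out = new FileOutputStream(name);
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();
    }
}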
#4
I strongly support galatnm's approach. Wget and wput are a much better idea than "polluting" a whole bunch of commands with options to make them work with remote files.
#5
Is it pollution because Unix doesn't do it, or why?
It works for cat, what's the problem with cp?
wget is a separate program, which is fine to have. It is also used to do much more than cp and works quite differently.
In these networked times I support URLs in shell commands where they make sense.
#6
Both of you are right. Perhaps we can implement commands like in the Unix world, to have a base set of commands and improve them where it makes sense.
My 2 cents.
#7
It is wrong to add -u to cat and cp because there is no end to it. Next thing someone will want a -u option on dir, leed, jar, javac, and on and on.
Then they will want a -j (to read and write jar files), a -c (to read or write compressed files), and so on. Before you know it, every new JNode command needs to implement dozens of these options, and they'll be big and complicated (bloated), they'll have inconsistent options and subtly different behaviors, and users will tear their hair out.
Take a look at the 'cat' command and the extra code that was required to implement "cat -u". Now extrapolate.
There is a term for this - it is called "creeping featurism".
IMO, it is better to design each command to do a specific task well rather than try to make it do a wide range of tasks and have it do them poorly. Complicated tasks are best handled by using shell scripts, (UNIX-style) aliases and so on to glue simple commands together.
And if you want many commands to be able to read and write (for example) files identified by URLs, a better solution is to map the URL space into the JNode filesystem namespace. The idea is to allow all commands to work naturally with both local and remote files/directories (and more) WITHOUT having to code this into each and every application.
(A more radical approach is to design and implement a new JNode-specific abstraction that combines the functionality of URL/URI/File and offers the full range of APIs including opening streams, creating and navigating directories, examining metadata and so on. At the command line level, you'd need a new argument syntax and corresponding class. The obvious problem with this approach is that it ties applications to the JNode platform.)
I'm not saying we should do these things. I'm simply offering them as (IMO) better alternatives to feature creep via "-u" and its likely successors.
#8
Your reasoning against supporting "-u" seems absolutely plausible to me. But despite that, I like the idea of using common commands with URLs/URIs too. In this regard, KIO slaves came to my mind (and I guess there's something similar for GNOME). I love those and I often use them on my local PC. E.g. if you type fish://user@host/ or sftp://user@host/ in the address line of Konqueror, you'll get connected to an ssh/sftp server. Most KDE applications have support for that, and in many applications you can simply enter such KIO URIs in any open dialog.
Perhaps we could do something similar in JNode that even works on the command line. E.g. the syntax either expects a File or a URI, and if the URI starts with a "/" it is treated the same as "file://..."; all others have the protocol specified right before the ":" anyway (with File-only syntaxes you'd only be able to supply files).
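A rough sketch of that rule, assuming we piggyback on java.net.URI (the class and method names are invented for illustration):

import java.net.URI;

public class UriArg {
    /**
     * Interpret a command-line argument according to the proposed rule:
     * a leading '/' means a local file, everything else carries its
     * protocol before the ':' anyway.
     */
    static URI parse(String arg) {
        if (arg.startsWith("/")) {
            // "/jnode/home/x" is shorthand for "file:///jnode/home/x"
            return URI.create("file://" + arg);
        }
        // "sftp://user@host/", "fish://user@host/", "http://..." etc.
        return URI.create(arg);
    }
}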
#9
URLs are more generic than files. Every file can be converted to a URL. It is generally a Java best practice to use URLs instead of files, because your program becomes much more portable and flexible.
In Java, URLs are more important than files. This can be seen from the way java.io.File and java.net.URL are used throughout the system. You will find that URLs are more deeply linked into the critical parts of the system than files.
I think some kind of first-class support for URLs would be useful in JNode. I'm not sure a new argument type which merges files and URLs is the solution. One problem is that for files we must implement wildcards, which do not work with URLs. So as a first step it should be identified whether the arg is a file or a URL, and then acted upon accordingly. This could be done either with the URL constructor, intercepting the related exception, or with a regex matcher.
The simplest solution could be a -u option requiring a single URL argument, which would make parsing less complex. To support multiple URLs we could specify the -u option more than once. I'm not sure if this is possible right now in the syntax framework, but we could declare the -u parameter repeatable and retrieve the URLs in an array, much as you specify String... as the last arg of a method and refer to those arguments inside the method as an array of Strings. One open question is how to preserve the order of args if files and URLs are intermixed in the argument list; this would be needed for a correct implementation of the cat command.
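For example, the URL-constructor approach could look roughly like this (an untested sketch; the class and method names are made up):

import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

public class ArgSniffer {
    /**
     * Decide whether an argument names a URL or a file by letting the
     * URL constructor do the parsing and intercepting its exception.
     */
    static Object resolve(String arg) {
        try {
            return new URL(arg);      // has a protocol: "http://...", "ftp://..."
        } catch (MalformedURLException e) {
            return new File(arg);     // no protocol: treat as a plain file name
        }
    }
}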
You asked what happens if we extrapolate to the other commands, or if users request all kinds of wild use cases. The answer is that we should define the scope of our URL support and stick to it. The scope could be read-only URL support; in that case cat and cp make sense, and we would not support writing to URLs. Adding URL support to individual commands would be up to the command implementor. The point is that where there is URL support, it should be consistent with our approach.
The examples with jar and zip support for commands are very far-fetched and have nothing to do with the topic. On the other hand, if URL support were in place, we could actually handle those cases by means of URLs in the following way. We register a URL stream handler for jars and read individual files from a jar like this: cat jar:///jnode/tmp/test.jar!/META-INF/MANIFEST.MF, or a similar syntax that basically depends on us. I hope this is a convincing example of the power of URLs. A similar solution exists in JNode already for accessing the contents of plugins. Now extrapolate to other archive types and to other things beyond archives. In the same way, with specialized protocol handlers we could get info about devices in the system or other system resources.
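For what it's worth, plain Java already ships a jar: stream handler, so the idea can be sketched with the standard syntax (the jar path here is hypothetical; the shorter jar:/// form above would need a handler of our own):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class JarCat {
    public static void main(String[] args) throws Exception {
        // Standard syntax for the built-in jar: handler.
        URL url = new URL("jar:file:/jnode/tmp/test.jar!/META-INF/MANIFEST.MF");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}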
Using a special filesystem for URLs doesn't make sense because URLs are more generic than filesystems. The dir command cannot operate meaningfully in such a filesystem in a directory like /http/www/. Now what good would a filesystem be where dir does not work? Instead, the functions provided by the jifs filesystem could very well be replaced with URLs.
So at this point the question is not whether we should support URLs, the question is how to support them. The example of the cat command is not very good because it's not well written. It can only handle either files or URLs, which is a sad limitation; it is badly written, with duplicated code; and it should preserve the order of arguments even with mixed file and URL argument sets for correct operation. For avoiding code duplication, one answer is finding the common denominator of the input objects. This could be either the URL, where files would be converted to URLs, or the InputStream. The question of generic code versus performant code also arises, and performance could often take priority, in which case we would avoid converting files to URLs.
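A rough sketch of a cat built on the InputStream common denominator, preserving argument order (hypothetical code, not the current JNode cat):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;

public class UCat {
    /** Open an argument as a stream, whether it names a URL or a file. */
    static InputStream open(String arg) throws Exception {
        try {
            return new URL(arg).openStream();
        } catch (MalformedURLException e) {
            return new FileInputStream(new File(arg));
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] buf = new byte[8192];
        for (int i = 0; i < args.length; i++) {   // argument order is preserved
            InputStream in = open(args[i]);
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
            in.close();
        }
        System.out.flush();
    }
}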
So I think we'd better focus on a good solution for the problem of URL support rather than engage in endless arguments about the need for URLs.
#10
I am happy for you to come up with a better alternative to adding "-u" options to any command that currently takes file arguments.
BTW, I never said (or thought) that we shouldn't support URLs or something at least equivalent to them. My argument has been about how we should support them. And "cat -u" is a good example of the wrong way to support them, both from a UI design and an implementation perspective.
You said: "Using a special filesystem for URLs doesn't make sense because URLs are more generic than filesystems. The dir command cannot operate meaningfully in such a filesystem in a directory like /http/www/. Now what good would a filesystem be where dir does not work?"
A FS-like mapping of URL "xhttp://www.jnode.org/nodes/911" should look something like "/xhttp/www.jnode.org/nodes/911". Mapping it to "/xhttp/www/jnode/org/nodes/911" is broken because it breaks the model of a hierarchical namespace. And "/xhttp/org/jnode/www/nodes/911" is almost as bad because the FS mapping layer has no idea where the URL hostname ends and the URL pathname starts.
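A sketch of that mapping rule, keeping the hostname as a single path component (the class name is invented for illustration):

import java.net.URI;

public class UrlMapper {
    /**
     * Map a URL onto an FS-style path. The hostname must stay a single
     * path component, so "xhttp://www.jnode.org/nodes/911" becomes
     * "/xhttp/www.jnode.org/nodes/911".
     */
    static String toPath(String url) {
        URI u = URI.create(url);
        StringBuilder sb = new StringBuilder();
        sb.append('/').append(u.getScheme());
        sb.append('/').append(u.getHost());
        if (u.getPort() != -1) {
            sb.append(':').append(u.getPort());
        }
        sb.append(u.getPath());
        return sb.toString();
    }
}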
Given that correction, your question is easy to answer. The "xhttp" directory is analogous to a UNIX directory without 'read' permission. If you know a name (hostname) in it, you can do a lookup. If you don't, you cannot list the names (hostnames). So "dir /xhttp/" would say "not listable" and TAB completion of "dir /xhttp/www." would offer no completions.
But before you jump in and say that these limitations mean the FS mapping "doesn't work", note that you would have the same problems with "cp -u xhttp://www.*/ somedir" or with TAB completion of "cp -u http://www.". These limitations are a consequence of a fundamental property of the internet: the DNS namespace is not listable. We have to live with these limitations if we are going to make URLs or their equivalent "first class".
IMO, the file system mapping approach has some important things going for it:
Returning to your point that URLs are "more generic than file systems", I respectfully disagree. If you ignore the 'query' part, URIs and generalised file pathnames are pretty much equivalent in what they can express. And the idea of mapping/splicing other namespaces into the file system namespace is by no means new; I remember an example from 20+ years ago.
#11
Actually, there's another reason why turning everything into URLs does not solve the problem. The java.io.File API provides methods like 'exists', 'isDirectory', 'mkdir', 'rename', 'delete' and so on, but the java.net.URL and URI APIs do not. Commands that depend on being able to do these things simply could not be implemented using java.net.URL/URI.
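To illustrate the gap (the local path here is hypothetical):

import java.io.File;
import java.net.URI;

public class ApiGap {
    public static void main(String[] args) {
        File f = new File("/jnode/tmp/demo");   // hypothetical local path
        // java.io.File supports the operations commands rely on...
        boolean e = f.exists();
        boolean d = f.isDirectory();
        boolean m = f.mkdir();
        boolean r = f.renameTo(new File("/jnode/tmp/demo2"));
        System.out.println("exists=" + e + " isDirectory=" + d
                + " mkdir=" + m + " rename=" + r);

        URI u = URI.create("http://www.jnode.org/nodes/911");
        // ...while java.net.URI/URL offer nothing comparable: the best you
        // can do is open a stream for reading, so a recursive copy or a
        // rename cannot be expressed against the URL API at all.
        System.out.println(u + " -> no exists()/mkdir()/renameTo() equivalent");
    }
}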
This means we'd need a new JNode URL/URI API, and replacements for the Java classes that depend on the existing URL/URI classes, and so on. And of course, any code that we wrote that depended on these new APIs would be JNode specific, or at least dependent on JNode compatibility libraries. This does not sound like a good idea to me.
#12
After thinking about it, there's no way to implement "cp -u" to support much of "cp"'s functionality. For instance, there is no way the command could create directories to perform a recursive copy, and I don't think it could even test whether the last argument is a directory to implement "cp x y zdir".
I'm marking this as 'postponed' and deassigning it. (And it is NOT critical by any stretch of the imagination!!)