With the recent announcement that Ubuntu 11.04 will ship Unity instead of Gnome Shell by default, there has been a lot of backlash. I’m talking about the knee-jerk reactions like “I DON’T LIKE THE DIRECTION UBUNTU IS HEADED, TIME TO SWITCH TO GENTOO!!!!!!!!!!!!!11”. It’s funny how people can react like this to a release that isn’t even out yet. Unity is already the default interface on the netbook edition, but what needs to be understood is that this is a “1.0 release” sort of thing. In the same way that the first release of KDE 4 was rough around the edges, Unity is rough around the edges. It is definitely not perfect; even I have some complaints about it. But for the love of sanity, try it for more than ten seconds and base your decision on actual experience, not hearsay. Also realize that the Unity that ships with 11.04 will be very different from the current version. The developers are going to get a lot of feedback during this release cycle and will improve on it.
Transitions can sometimes be a bit uncomfortable, and we’ve been through this many times before. As I mentioned, when KDE 4 was initially released there were many, many complaints; now all I hear is how nice KDE 4 is. When PulseAudio was released it caused a lot of problems for people, but over time it has greatly improved, and I would even dare to say it now works for most people most of the time. When Ubuntu first released Notify-OSD, there was a lot of outcry about that too, but it died down after a while. Maybe because people became used to it, or *gasp* prefer it to the old notification-daemon. Unity will follow a similar path. Transitions can be bumpy, but everything turns out OK in the end.
Some other things to consider:
- Gnome Shell will still be in the repositories and easily installable.
- Gnome 2.x will still be in the repositories.
- Unity is not replacing Gnome 3.
Give Unity a chance before writing it off and don’t spread FUD.
The Gnome archive manager, File Roller, is terribly inefficient in the way it handles archive extraction and creation. I happened to have Htop running while using File Roller to extract a large gzipped tar archive that was sitting on my external hard drive, and I saw what File Roller was actually doing. First off, File Roller is pretty simple in the way it works: it’s just a GUI wrapper for command line tools like tar, gzip, etc. So here is what File Roller does when you extract something. It copies the whole archive to a temporary directory in ~/.cache/.fr-xxxx. Then it extracts it in the temporary directory and copies the extracted files to the location you’re extracting them to. ARRRRRGGGHH! 👿 What is this I don’t even
Now if that archive happens to be sitting on an external drive and you’re extracting it to the same directory, or to another directory on the same drive, you can see just how terribly inefficient this is. It has to copy the archive to your home directory, extract it there, then copy the results back to the external drive! This is such a waste of time; the unnecessary copies to temporary locations can make the job take twice as long as it should. Like I said before, File Roller is simply a GUI wrapper for command line tools, which is what makes this even more frustrating: tar already supports extracting to alternate locations with the -C option. So why the hell is File Roller making unnecessary, time-wasting copies to temporary locations!? I can accomplish the same thing in less time and more efficiently than File Roller with something like this:
```shell
tar -xzvf file.tar.gz -C /path/to/destination
```
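To make that concrete, here’s a self-contained sketch (all paths and file names invented for illustration) showing that tar reads the archive in place and writes straight to wherever -C points, with no intermediate copy:

```shell
# Build a throwaway archive to stand in for one on an external drive.
mkdir -p /tmp/fr-demo/drive /tmp/fr-demo/dest
echo "hello" > /tmp/fr-demo/drive/data.txt
tar -czf /tmp/fr-demo/drive/file.tar.gz -C /tmp/fr-demo/drive data.txt

# Extract straight into the destination directory; tar never copies
# the archive anywhere, it just reads it and writes the output files.
tar -xzf /tmp/fr-demo/drive/file.tar.gz -C /tmp/fr-demo/dest
```

One read of the archive, one write of the files. No ~/.cache round trip.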
Another reason this is so bad: if you’re running low on disk space in your home directory and you try to extract an archive from an external drive that is larger than your free space, you won’t be able to. File Roller will complain about the lack of disk space even though you may have more than enough free space on the external drive itself.
Finally, it does something just as pointless when creating archives. I had a 2.3GB file on my desktop that I wanted to compress with lzma. Guess what File Roller did. Yep: it copied the whole 2.3GB file to a temporary directory, compressed it there, then copied the result back to my desktop. 👿
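Creating an archive can be done the same copy-free way. A small sketch (file names invented; I use gzip here so it runs anywhere, but GNU tar accepts --lzma in place of -z for the lzma case above):

```shell
# Stand-in for the big file sitting on the desktop.
mkdir -p /tmp/fr-create
echo "big file contents" > /tmp/fr-create/bigfile.dat

# Compress directly from where the file lives: -C changes into the
# source directory, so tar reads the file in place and writes the
# archive to the destination in one pass. No temporary copy is made.
tar -czf /tmp/fr-create/bigfile.tar.gz -C /tmp/fr-create bigfile.dat
```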
I see no logical reason File Roller needs to be making these unnecessary copies to temporary locations. If anyone has a good reason why it does this please enlighten me.
Related bug https://bugs.launchpad.net/ubuntu/+source/file-roller/+bug/146206
Every time a new Ubuntu release comes around, the topic of upgrade vs. fresh install comes up. I’ve noticed there seems to be a general hatred of upgrades. The most common thing I hear is that the upgrade totally breaks your system, makes you lose all your money, and burns your house down. OK, well, maybe not the last two, but there is a lot of “OMG upgrades are bad!!!!” out there. Now I may be going out on a limb here, but I think a lot of people just repeat what others say about upgrades. I’d be willing to bet that many of the people who say upgrades break the system have never actually done one; they just get suckered in by everyone else saying upgrades break your system. And then it just goes in circles.
Now I’m not saying that Ubuntu’s upgrade process has never broken someone’s system; I’m sure it has, more than a few times. Nothing is perfect. What I’m saying is that upgrade breakage is being blown way out of proportion. Personally, I have upgraded five computers multiple times and have never had a single thing break due to an upgrade. My desktop machine has not seen a reinstall since Ubuntu 7.10 was released; it has been upgraded three times and is still running strong, with nothing ever breaking from an upgrade. So either I’ve been extremely lucky, or it’s not as bad as everyone makes it out to be. I’m thinking the latter.
Over the past few releases, Xorg has been gradually moving away from xorg.conf. The goal is to get rid of xorg.conf entirely and have everything “just work” through auto-detection. However, since auto-detection was introduced along with some nifty HAL goodness, I’ve heard a lot of people complaining about it. A lot of people are saying they actually want to edit their xorg.conf. This strikes me as odd, because a while ago it was the exact opposite.
Two years ago, back when I started with Linux, everyone made a huge deal about editing xorg.conf. The big complaint was that dealing with xorg.conf was the biggest thing holding Linux back from the mainstream desktop: a new user would be turned off if they had to edit a confusing text file just to get their graphics working. If Windows doesn’t need an xorg.conf, we shouldn’t either. And so it went, on and on.
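For anyone who never had to touch it, this is the kind of stanza people used to hand-edit (the driver name here is just an example, not a recommendation):

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"   # forcing a specific driver by hand
EndSection
```

Multiply that by Monitor, Screen, and InputDevice sections and it’s easy to see why new users found it intimidating.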
And this is where we are right now. Things mostly “just work” with little or no configuration, but now people are complaining that they can’t edit their xorg.conf. I thought that’s what everyone wanted. It seems to be a no-win situation. 😐