|
Your experience is exactly the sort of thing I was hoping for.
My expectation was that it'd be fine with a bunch of drives in a multi-drive enclosure connecting back to the PC over USB-C. It sounded simple enough in my mind...
|
|
|
|
|
There are a couple of things that I think need some clarification. First is that you may be mistaking Windows' willingness/ability to accept whatever drives you want to throw at it as an endorsement that what you're doing is a good idea and will perform well (both speed-wise and data integrity-wise). IMHO, that's an incorrect assumption. TrueNAS will do what you want it to do but it will not endorse it as a good idea (from a data-integrity and performance POV) because it's not. People use TrueNAS for its performance, stability, data-integrity and the UI on top of it which makes it really easy to create a reliable setup. If TrueNAS isn't letting you do something easily, that should be a sign that what you're doing isn't a good idea for a super stable, reliable and performant system. In that light, it's more of a guardrail that is intended to give you pause before hopping over it.
I think Unraid might be more of what you're looking for. One of its strengths and key selling points is that it will take whatever disks you throw at it and add them to your storage. It's also got a nice UI that makes things pretty easy to do. As long as you understand that throwing whatever kind of disks you want into your storage pool, without concern for their age, quality, storage capacity, etc., is generally not going to be as reliable from a data-integrity standpoint as what you would get with better drives of matching capacity, you'll be fine. For many use cases, that's sufficient. As long as you make sure that anything you absolutely can't lose is backed up, you should be good.
The second thing that I think needs some clarification is that TrueNAS (or FreeNAS as it used to be called) isn't Linux. It is based on FreeBSD (a Unix flavor). While both Unix and Linux support the POSIX standard, they are separate operating systems with different capabilities, strengths, and weaknesses. BTW, Unraid is based on Linux (specifically the Slackware distro).
The fact that there are a ton of different Linux distros can definitely be overwhelming; it was for me when I first got started. However, I've come to view it more as giving me the ability to evaluate different things and pick the best tool for the job. I'm not stuck with taking a "jack of all trades" approach like Windows often takes. I use both Linux and Windows as daily drivers both on bare metal and VM. Both are stable and performant. It's taken me more time to read and learn about the various Linux distros but it has paid off big time in stability and being able to tailor an optimized solution for whatever computing problem I need to solve.
|
|
|
|
|
That's a great discussion and I thank you for writing down your thoughts.
Couple of things:
greyseal96 wrote: first is that you may be mistaking Windows' willingness/ability to accept whatever drives you want to throw at it as an endorsement that what you're doing is a good idea
Sorry, in case I didn't make it clear, I fully realize that Windows being more permissive and letting you go ahead doesn't mean any of it is a good idea. I had already come to that conclusion. The distinction I was trying to make is that TrueNAS blocked me altogether. Windows makes no such attempt. But again, making that choice is left to the user (understanding risks and all).
greyseal96 wrote: generally not going to be as reliable from a data-integrity standpoint as what you would get with better drives of matching storage capacity you'll be fine
Where does that leave JBOD systems, I wonder? I didn't invent the acronym, so surely the idea has enough merit that people use such systems.
greyseal96 wrote: TrueNAS (or FreeNAS as it used to be called) isn't Linux. It is based on FreeBSD (a Unix flavor).
Someone else also brought that up. I hadn't mentioned it because the end result would've been the same: TrueNAS Core is based on FreeBSD; TrueNAS Scale is based on Debian. I would've pointed out which one I tried if I had been under the impression the outcome would've been different.
greyseal96 wrote: The fact that there are a ton of different Linux distros can definitely be overwhelming;
I'm not terribly worried about that; one of my part-time hobbies is to hunt down random distribution ISOs and try to get them running in VMs. I'm looking at my collection right now, and the root folder stands at 607GB worth of individual ISOs alone.
|
|
|
|
|
Ummm, with all due respect... TrueNAS is not based on GNU/Linux. It's based on FreeBSD. Different systems, different kernels, different drivers, and so on. So even though it is an open source project... it's not a GNU/Linux distro. Therefore GNU/Linux hasn't disappointed you yet.
|
|
|
|
|
Well if you wanna get nitpicky, TrueNAS Core is based on FreeBSD, while TrueNAS Scale is based on Debian.
I didn't bring these up because that particular point was entirely irrelevant to the discussion at hand. Are you under the impression the results I got would've been different had I been using one vs the other? If not, then again, it wouldn't have changed the discussion.
|
|
|
|
|
Debian does have older packages, especially Debian Stable. I'm not sure about FreeBSD because I've never used it, so I assume there may be differences between packages maintained by a distro vs. getting the latest upstream code and just compiling and packaging it.
Again, I don't know how FreeBSD uses upstream code or what its patching and maintenance policies are, so this is just an assumption and I could be wrong. If you're using TrueNAS Scale, then you're correct. If you're using TrueNAS Core, then there is a possibility that my statement holds true.
Not trying to be nitpicky or difficult just making observations that might have some bearing in the discussion at hand.
I also think that perhaps you might want to look for alternative ways of making this work for you. I don't know how TrueNAS (Core, Scale, or any other branch of it) handles USB external storage devices, but you could make a folder with subfolders and mount each of your USB storage devices in those subfolders. It's not as elegant or polished as the Windows approach, but it's just as functional.
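For what it's worth, on a Debian-based system like TrueNAS Scale that mount-per-subfolder approach can be sketched with a few /etc/fstab entries. This is just a sketch: the UUIDs and mount points below are made-up examples, and you'd use `blkid` to find the real UUIDs of your drives:

```
# /etc/fstab (sketch -- UUIDs, paths and filesystem types are examples)
# "nofail" keeps boot from hanging if a USB drive happens to be unplugged
UUID=1111-2222-AAAA  /mnt/usb/backup1  ext4  defaults,nofail  0  2
UUID=3333-4444-BBBB  /mnt/usb/backup2  ext4  defaults,nofail  0  2
```

After creating the /mnt/usb/backup1 and /mnt/usb/backup2 directories, running `mount -a` as root (or rebooting) should bring the drives up under those subfolders, much like drive letters do on Windows.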
|
|
|
|
|
I am not sure "who's on first", AKA whom to blame, so this is probably not a Linux issue... but
here it comes...
I like to use gparted, but it appears to have a timing issue with multiple partitions and any disk larger than 100 GB...
It also keeps "scanning" (usually) ALL devices after a minor change is made to one device...
As I am saying - it is hard to "blame" this (stupid) behavior on anything in particular...
PS: ...and it does not do "mount"...
|
|
|
|
|
Well, just look at the bright side. Linux prevented you from doing a silly thing. A really silly one.
Using old, used drives in a RAID, any form of RAID is simply a bad idea. It's data loss waiting to happen. Even on Windows, as you have no way to recover any data from the drives if something goes south with that array.
As far as Linux on real hardware goes, I never had a problem with video. My Linux development box is a Dell laptop with a 1600x900 display, and that works perfectly fine in Linux Mint 21.2. Sometimes I temporarily hook up a second, external monitor; no problem either.
The only problem I really had in the last decade or so was with some Intel WiFi NIC drivers, mainly when I tried to use Fedora a few years back. But again, under Mint, no problem at all.
|
|
|
|
|
Well obviously RAID isn't a backup. I'm not too worried about losing anything on that RAID setup.
I already have 2 separate sets of backups of my main data set. All I wanted to do with this is create an extra backup set, using drives I've retired. These drives have less than a few dozen hours on them - I used them to do my previous backups, but have outgrown them as my data set has increased in size. They've been powered on only when the actual backups were taking place, which occurred anywhere between once a week and once a month.
|
|
|
|
|
I worked as a dev on Windows for 20 years. I found it was pretty good by Windows 7 and Windows 10.
3 years ago, I switched to Linux Mint. There was a bit of a learning curve, but now I couldn't be happier! I'm running a multi-monitor dev workstation. My productivity is through the roof. Since then I've installed Linux on 50 different machines with 0 issues (some servers with RAID too). I only have 1 Windows machine left, which I am about to decommission.
Then this year I got a new contract, and I have to work with my customer's Surface Windows 10 computer. I'm not going to say what I REALLY think about it, but: it's full of bugs, looks like crap, multi-monitor only half works, and the updates...
I was a Windows user, but now: Windows, why do you keep disappointing me?
The point is, there is a learning curve, but for most things I believe Linux has surpassed Windows now.
Christian Lavigne
|
|
|
|
|
Saw an article this morning recommending that you run the command "Defrag C:" from time to time on your SSD drives. But does it make sense to defrag an SSD? I can understand that it is of value on old spinning hard drives, where fragmentation can cause the read head to physically jump from fragment to fragment, but an SSD has no moving parts.
What do the experts say?
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
No - all it will do is 'burn' write cycles, which shortens the life of the drive.
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
This is a well-known and probably correct argument.
On the other hand, the question I have is: If SSDs use something like DMA, could a certain kind of defragmentation increase throughput?
|
|
|
|
|
Agreed.
Steve Gibson (author of SpinRite) has discussed this numerous times on his Security Now podcast, and it makes zero sense to "defrag" an SSD.
Some people have called him a quack, and I originally sided with them (somewhat), but after listening to his podcast for nearly a decade, it's clear he's technical to an extreme and very knowledgeable. When he does a deep dive into some technical matter, I think he always makes a lot of sense. He's not clickbait-y and doesn't make outrageous claims.
Not that I had any doubt, when it comes to defragging an SSD. But his explanation for it (I don't have a show number for it, sorry) just sealed the deal for me.
|
|
|
|
|
dandy72 wrote: Some people have called him a quack
dandy72 wrote: it's clear he's technical to an extreme and very knowledgeable
You can be both you know.
|
|
|
|
|
Fair point. But I've been listening to his podcast for over a decade, and I have come to the conclusion that those who called him a quack were just poorly informed.
I forget what his exact concern was (something about XP's default network configuration?), but in the end he was proven right and Microsoft eventually had to seriously lock it down with SP2, which introduced (for the first time) the Windows firewall.
|
|
|
|
|
I remember him alright.
He isn't a quack, but has a tendency to fight windmills.
|
|
|
|
|
Jörgen Andersson wrote: fight windmills.
I had never heard of that one. That's a cute variation on "tempest in a teapot". I like it. Probably because it applies exactly. LMAO.
"Fighting windmills" is probably how I thought of him at the time I sided against him on some of his old claims. One thing I'll say for him, is that he's got honest beliefs. He believes in what he claims, and doesn't try to BS anyone. Which doesn't mean he can't ever be wrong.
|
|
|
|
|
Does a "defrag" use less space? Are there fewer "pointers" to follow? How much can you "save" in extreme cases? Is space a concern on a "maxed out" SSD?
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Gerry Schmitz wrote: Is space a concern on a "maxed out" SSD?
Is space not a concern on any maxed out drive, no matter what the underlying technology might be?
|
|
|
|
|
Defragging an SSD makes no sense, but trimming does.
SSD TRIM is an ATA command that enables an operating system to inform an SSD which data blocks it can erase because they are no longer in use. The use of TRIM can improve the performance of writing data to SSDs and contribute to longer SSD life. It is an expensive operation, which is why it isn't performed every time a block is released.
See this explanation by Kingston Technology, a RAM and SSD drive manufacturer: The Importance of Garbage Collection and TRIM Processes for SSD Performance
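On Linux, for instance, TRIM is typically done with util-linux's `fstrim`, either manually or on a schedule. A quick sketch (these commands need root, and assume the filesystem and SSD both support TRIM):

```
# Trim all mounted filesystems that support it, reporting bytes trimmed
fstrim --all --verbose

# Most distros also ship a weekly systemd timer, so you never defrag --
# you just let periodic trimming run in the background
systemctl enable --now fstrim.timer
```

The timer-based approach matches the point above: because TRIM is expensive, it is batched periodically rather than run on every block release. (Windows does the equivalent automatically via its "Optimize Drives" task when it detects an SSD.)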
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I respectfully disagree. Windows recognizes that your drive is an SSD and doesn't do it. Instead, there are other optimizations that Windows does to SSDs that are good to keep it working well. From my reading and understanding over the years, defragmenting, however, does nothing at all to an SSD drive except needlessly burn read/write cycles. If you know of information supporting your viewpoint, I'd love to read about it.
|
|
|
|
|
Keefer S wrote: If you know of information supporting your viewpoint, I'd love to read about it.
You read the link?
|
|
|
|
|
Interesting read; it explains why Raxco's PerfectDisk uses a consolidate-free-space algorithm by default for SSDs.
|
|
|
|