|
To my knowledge, this happens automatically anyway, in the form of TRIM, whereby unused blocks are cleaned up in the background.
The Samsung FAQs also specifically recommend against any kind of defragmentation, as it causes additional writes, which in turn shorten the lifespan of the SSD.
|
|
|
|
|
Good question. Other comments favor NO, or just TRIM (which is something else, and was only an issue long ago, when you couldn't yet set things up to run automatically).
Arguments on the hardware side are pretty convincing, but then still:
- hardware: why is reading/writing the same bytes in different-sized packages so much slower for small packages?
- what about the OS having to issue many more disk requests, each switching out of user mode? Can't that slow things down?
- and finally, what about measuring?
- and finally, what about measuring?
My impression is that it does make a difference. So, at most once a month, when I believe it is useful, I do a full defrag.
There is (in my case) an argument against it: differential backups (disk images). A defrag costs many more backup bytes than going without, so much so that after a few backups a new complete backup becomes an option.
So I will lower my defrag frequency even more, to say once in 3 months.
Never say never!
|
|
|
|
|
The part that matters isn't so much performance, in the way defrag can help with a spinning platter.
I think Windows is supposed to approach defrag differently based on the drive's firmware and its being an SSD (it just happens).
The reason it matters to an SSD is that when an SSD gets heavily fragmented, it can mean much faster wear and tear on the drive.
This is because instead of being able to put the data in one place, in a full page, it has to put it in multiple different places... so instead of 1 read/write for that file, you're now doing 5. Not exactly, but pretty much.
I think early on this, and the fact that SSDs needed to be treated differently in this regard, was not recognized, and it caused several brands of drives to die well before their expected MTBF.
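A toy model can make the "1 read/write vs. 5" point concrete. The sketch below is purely illustrative (the block geometry, page numbers, and the in-place-update assumption are all made up; real drives use page-mapped FTLs that soften this effect): it counts how many erase blocks must be touched to update a file's pages, since flash can only erase whole blocks at a time.

```python
# Toy model of flash update cost. BLOCK_PAGES and the page numbers are
# assumptions for illustration; real geometries and FTL policies differ.

BLOCK_PAGES = 64  # pages per erase block (assumed)

def blocks_touched(page_locations):
    """Count the distinct erase blocks containing the given pages.
    Updating pages spread over more blocks means more erase/program
    cycles, i.e. more wear."""
    return len({page // BLOCK_PAGES for page in page_locations})

# Five pages of a file kept together in one erase block:
print(blocks_touched([100, 101, 102, 103, 104]))   # -> 1
# The same five pages scattered across the drive:
print(blocks_touched([100, 300, 700, 1300, 2500])) # -> 5
```

So a scattered update costs five erase cycles instead of one in this naive model; the real-world numbers depend heavily on the drive's translation layer.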
|
|
|
|
|
jochance wrote: The reason it matters to an SSD is because when an SSD gets heavily fragmented, what it can mean is much faster wear and tear on the drive.
Quite a few links (posted here, and found when googling) describe what actually happens on an SSD.
With the older hard disk drives there was a physical spinning platter. Thus 'wear and tear' as the arm switched back and forth over the various tracks.
With an SSD there is no arm to move. Addressing is direct.
|
|
|
|
|
jschell wrote: Thus 'wear and tear' as the arm switched back and forth over the various tracks.
I have never heard of a disk with a worn-out arm. (Nor of a loudspeaker with a worn-out voice coil; the mechanisms are similar, and the speaker has probably made orders of magnitude more back-and-forth moves.) There is no physical contact between the arm/head and the platter, and no physical wear from long use.
jschell wrote: With an SSD there is no arm to move. Addressing is direct.
Down to some level. While a magnet can be flipped one way or the other a more or less unlimited number of times, a cell in an SSD wears out with repeated writes. A good rule of thumb used to be 100,000 writes; some people said even that was overly optimistic. Technology may have improved, but all SSDs still use some kind of wear leveling: there is a mapping to physical blocks, so that writes are spread evenly over the free blocks. The external address you use when writing a block does not map directly to one physical location on the SSD.
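The wear-leveling idea can be sketched in a few lines. This is a deliberately naive model (the class, the "least-worn free block" policy, and the sizes are all invented for illustration; real FTLs are far more sophisticated): the drive redirects every write of a logical block to the least-worn free physical block, so hammering one logical address still spreads wear across the whole drive.

```python
# Toy wear-leveling sketch. Everything here (names, policy, sizes) is an
# illustrative assumption, not how any specific SSD firmware works.

class ToySSD:
    def __init__(self, n_physical):
        self.mapping = {}                   # logical block -> physical block
        self.wear = [0] * n_physical        # program/erase count per block
        self.free = set(range(n_physical))  # physical blocks available

    def write(self, logical):
        # Pick the least-worn free block instead of overwriting in place.
        target = min(self.free, key=lambda b: self.wear[b])
        self.free.discard(target)
        old = self.mapping.get(logical)
        if old is not None:
            self.free.add(old)              # old copy becomes reclaimable
        self.mapping[logical] = target
        self.wear[target] += 1

ssd = ToySSD(8)
for _ in range(100):
    ssd.write(0)    # hammer a single logical address 100 times
# The 100 writes are spread nearly evenly over all 8 physical blocks,
# instead of burning out one cell:
print(ssd.wear)
```

This is also why the external block address tells you nothing about where the data physically sits at any given moment.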
|
|
|
|
|
I think they misunderstand. The wear was more on the bearing of the spinning platter. I think most of the arm casualties were from bearings that gave out, with half a pound of spinning metal being set loose in the enclosure.
|
|
|
|
|
trønderen wrote: There is no physical contact between the arm/head and the platter, and no physical wear from long use.
Nor did I claim that.
The arm moves. Thus the mechanical action itself requires...mechanics. Which do in fact wear out.
https://www.reddit.com/r/AskElectronics/comments/18gn8l/hard_drive_arm_failed_any_suggestions_besides/[^]
trønderen wrote: a cell in an SSD is worn out with repeated writes.
Again not something I said.
A traditional hard drive platter can wear out due to reads/writes. That was especially true when the technology was first introduced. However, accessing the location has nothing to do with that, neither in an SSD nor in a traditional hard drive.
As I stated, with the traditional hard drive there is an arm that moves.
In an SSD there is not.
|
|
|
|
|
jschell wrote: trønderen wrote:There is no physical contact between the arm/head and the platter, and no physical wear from long use.
Nor did I claim that. Nor was it my intention to repeat what you said, but to add information.
I have never before heard of a disk arm "wearing out". If this was a problem, disks that have been operating for many, many years should have broken down a long time ago due to arm failure. You don't hear much of that! (*)
Also, the failure of a disk arm is not necessarily a result of mechanical wear. It could be e.g. an electronics failure, dirt entering into the mechanical parts, deformation due to mechanical shock (dropping the disk to the floor etc.) or a number of other reasons.
The person reporting that problem describes the disk arm moving back and forth 1/4 of an inch. You cannot see that without opening the disk, which makes me somewhat suspicious. I would never trust a disk that has been opened up by an amateur.
trønderen wrote:a cell in an SSD is worn out with repeated writes.
Again not something I said. Again, I didn't intend to repeat what you said, but to add information.
(*) A disk arm is operated very much like a loudspeaker cone. A story from a while back: one of my study mates had ordered a really powerful audio amplifier and a set of large speakers. The amp arrived before the speakers, so he tried out the amp with his old, small speakers. This guy was into classical music, and he reported that when playing the "1812 Overture", when the cannons were fired, he got smoke effects from his speakers.
If you could directly control the power applied to your disk arm, you could possibly get similar smoke effects. But you cannot, unless you really set out with a determined wish to destroy your disk unit.
|
|
|
|
|
Yes I am aware.
It's not an arm moving that causes the wear. It's the natural process that every bit of flash ever made is susceptible to, and heavy fragmentation can cause more of it.
If you google hard enough, I'm sure you'll find I'm not fibbing.
|
|
|
|
|
Other posts provided links, but I still have not seen anything authoritative.
The following, while still not authoritative, seems to agree with what other, even less authoritative sources say. And at least the post date is more recent.
https://www.pcmag.com/how-to/how-to-defrag-your-hard-drive-in-windows-10[^]
That link, and others, state that an SSD should not be 'defragged' but rather 'trimmed', and that this is what the process does.
As noted in my other post, my Windows 10 computer does NOT have the stated process enabled. I didn't turn it off, and I think I remember installing Windows 10 directly (I remember because I was annoyed that it didn't come installed out of the box).
But I can't find anything that says whether the default is on or off.
|
|
|
|
|
No, it's utter nonsense. For at least two reasons.
For one, assuming you are running a recent Windows, NTFS isn't prone to fragmentation anymore in the way FAT used to be in the days of old.
And second, you are just wasting write cycles on that SSD (which are still limited below the lifetime of a "spinning rust" drive) for little gain, if any at all.
|
|
|
|
|
Doesn't Windows itself refuse to defrag an SSD? Try running Windows defrag on an SSD and it will simply do some 'trimming', and won't show any fragmentation status. That should be a clear answer; the maker of the OS should know best.
|
|
|
|
|
Try this: Run defrag c: from an elevated command prompt.
Be patient, it takes quite a while, and you get:
Pre-Optimization Report:

    Volume Information:
        Volume size             = 930.65 GB
        Free space              = 868.64 GB
        Total fragmented space  = 20%
        Largest free space size = 863.72 GB

    Note: File fragments larger than 64MB are not included in the fragmentation statistics.

The operation completed successfully.

Post Defragmentation Report:

    Volume Information:
        Volume size             = 930.65 GB
        Free space              = 868.64 GB
        Total fragmented space  = 0%
        Largest free space size = 863.75 GB

    Note: File fragments larger than 64MB are not included in the fragmentation statistics.
Ok, I have had my coffee, so you can all come out now!
modified 6-Dec-23 7:50am.
|
|
|
|
|
Short answer: No.
Long answer:
Unlike mechanical drives, data blocks aren't stored physically in the same order as they are logically. Blocks are physically fragmented internally in most cases, regardless (including what order the OS believes them to be in). But this doesn't matter, since all blocks are accessed at the same speed (**), eliminating any speed advantage of sequential access (++).
** Hypothetically, a high end drive could read/write multiple physical flash chips simultaneously, allowing a block to be accessed without waiting for a prior one to finish, if stored on a different chip.
++ Some Flash Translation Layer (FTL) structures may have a slight speed advantage from accessing co-located logical blocks (such as unfragmented reads). But the speed improvement would be trivial compared to the speed increase of sequential access of mechanical drives.
While a given manufacturer's implementation may vary, the mapping of blocks generally works something like what is described on one of these pages:
Overview of SSD Structure and Basic Working Principle(2)
Coding for SSDs – Part 3: Pages, Blocks, and the Flash Translation Layer | Code Capsule
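To make the mapping idea above concrete, here is a minimal page-mapped FTL sketch (the class name and the append-only allocation policy are simplifying assumptions; the linked articles describe what real translation layers add on top). The point it demonstrates: physical placement is decided by the drive, not by the logical order the file system sees, so a logical defrag cannot produce physically contiguous data anyway.

```python
import random

# Minimal page-mapped FTL sketch. Append-only allocation is an assumption
# for illustration; real FTLs add garbage collection, wear leveling, etc.

class PageMappedFTL:
    def __init__(self):
        self.l2p = {}        # logical page -> physical page
        self.next_phys = 0   # next free physical page (append-only)

    def write(self, logical_page):
        # Every write lands on the next free physical page, regardless of
        # the logical address being written.
        self.l2p[logical_page] = self.next_phys
        self.next_phys += 1

ftl = PageMappedFTL()
pages = list(range(10))
random.shuffle(pages)        # the OS writes a file's pages in arbitrary order
for p in pages:
    ftl.write(p)

# A logically contiguous file (pages 0..9) ends up physically scattered;
# rewriting it "in order" would just burn write cycles and land the data
# on yet other physical pages.
print([ftl.l2p[p] for p in range(10)])
```

Since every physical page is read at essentially the same speed, the scattered layout costs nothing on reads, which is the core of the "don't defrag an SSD" argument above.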
|
|
|
|
|
...downhill!
VS consuming huge amounts of memory isn't new (even MS decided to ignore it totally)...
But now I have something new... and it has been confirmed several times...
I have a solution with around 80 projects in it, with only several loaded at any given time... If I reload a project to change something, it will not compile until VS is closed and re-opened...
Until then it reports that compilation failed, without any actual error, but also without the option to run...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
After the last update of VS2022, my colleague reported that debugging with step over and step into didn't work anymore. It was not clear to me whether he was talking about C++ or C# debugging; he also uses other debugging tools that might interfere with VS debugging.
|
|
|
|
|
RickZeeland wrote: debugging with step over and step into didn't work anymore. It was not clear to me if he was talking about C++ or C# debugging
Interesting you'd mention that. I installed the latest update last week, and on Thursday/Friday, on multiple occasions, single-stepping (F10) seemed to continue execution, or couldn't recover, or something like that. I attributed it to fat-fingering it, but it happened enough times that, now that I see your post, I'm wondering if there's something to it.
In my case that would be C#.
|
|
|
|
|
Wow. Not testing much, are you, Microsoft?
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
One has to remember that multi-project solutions don't (always) compile if you haven't checked the proper project(s) in the "Build | Configuration Manager" unless you specifically ask to "Build / Rebuild" that project. (Been there)
On the other hand, when VS is "sleeping", it "seems" to release (more) excess memory. I think they're doing a lot of tinkering.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I have a very precise dependency tree, so compiling the main project will compile everything that is outdated; I also mostly do build-solution...
But the main issue is that there is no error behind the failure, and re-opening VS solves the problem, which indicates that VS no longer knows how to reload an unloaded project correctly... (which is fixed by re-opening VS and the solution)...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
Kornfeld Eliyahu Peter wrote: VS consuming huge amount of memory isn't new
Versus which IDE that uses very little?
Kornfeld Eliyahu Peter wrote: I have a solution with around 80 projects in it
To me that would be an organization problem. I would break it into different solutions, and if that is not possible, it would suggest a different sort of problem.
|
|
|
|
|
Wordle 897 3/6*
🟩🟨⬛⬛⬛
🟩🟨🟩⬛🟩
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 897 3/6
🟩🟩⬜⬜⬜
🟩🟩⬜🟩🟩
🟩🟩🟩🟩🟩
All green 💚.
|
|
|
|
|
Wordle 897 3/6
⬛⬛🟩⬛⬛
⬛🟨🟩⬛🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
⬜⬜🟩⬜⬜
🟨⬜⬜⬜⬜
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|