Granted, threadripper is only quad channel (like your XE choice). Epyc will go full 8 channel. Both AMD chips will do ECC memory if that is important.
If you are debating between 128G and 256G of RAM, I suspect that PCIe4 NVMe might actually help you (it seems unimportant in non-workstation builds). Only available in AMD motherboards right now
With only 8× as much storage as RAM, the build looks odd. If you really don't care about keeping all the intermediate values, that isn't a problem. Otherwise, Best Buy typically has huge WD external drives on sale during major holidays.
If you are using Windows, you might have to check licenses and how the different tiers handle large core counts. You might need a Pro or Server edition (I think 256GB of memory will trigger this as well, probably even 128G). If you are using Linux, no problem.
PS: The rules clearly state that after completing a monster build like this, you have to post Crysis benchmarks. Them's the rules.
Another useful thing would be to lower power usage to extend notebook battery life. You'd probably have to get into the nitty-gritty of changing voltage offsets instead of manually setting the voltage (simply setting the voltage low would be the worst of all worlds), but it might make sense.
Looks like more than enough. Wattage estimate is at 320W, and that assumes that the motherboard is pumping out enough for a 3900X. I wouldn't go down much more, but 650W is plenty.
The CPU caps your fps if and only if you are willing to turn down resolution and/or graphics quality until capped by your CPU. There are typically few settings you can change to lower the load on your CPU.
At that point you have to ask yourself if you are willing to lower your graphics standards just to go from 100Hz to 140Hz.
I'm going to suspect the Intel chip has the edge with 6 cores, although the 3400G might make up for it with higher frequency and more threads. This assumes similar costs for motherboards (that appears to be the case, but I've been on the AMD train so long I'm not familiar with Intel motherboards).
The problem with the 3600 is that you'd "have" to add a video card. Any video card should do (as long as it doesn't draw too much power, and is from >2005 or so), so any old card you or a friend have lying around will work.
I'd recommend a 2700 instead: as you increase the number of games the thing is running, the 8 cores will win over the faster 6. I think the 2700 is still cheaper (sometimes the 3600 is), and also look at the 1800 (but only if it is cheaper than the 3600 and you have a video card).
A quick bit of googling leads me to believe it doesn't. That may be because ECC is present but not supported (and thus not listed as an official feature), like on the Ryzens, but with even less press it gets forgotten; or because it simply doesn't have it (which would be odd, since the pins have to be there thanks to sharing a socket with Ryzen, but maybe the additional GPU circuitry needed more ground pins - who knows?).
ECC is available on non-APU Ryzen processors, but it isn't officially validated or tested. The key issue is that you will need a motherboard that supports ECC (most don't).
The elephant in the room is that since ECC isn't officially supported, it is very hard to tell whether it is actually working once you've enabled it (for non-APU Ryzens at least). You might try putting Kapton tape over a data line on a DIMM (good luck getting it to stay, but once it does the experiment should be immediately conclusive) or running memtest86 for a few months (I'm not sure how long it takes non-ECC RAM to generate an error).
PS: Intel silently removed ECC from all [new] i3s.
I'd at least check what you can get out of an X570. Three M.2 PCIe4 slots might blow the budget, though (I'd only get one unless you are RAIDing them).
I know you said "10900k", but there is only one on Amazon and newegg is out of stock...
ECC is a pretty standard algorithm for these critters (Reed-Solomon), and they have to use the same thing to remain compatible.
If you are familiar with Linux, safecopy is a great program (console only, it might not be worthwhile if you aren't comfortable with Linux).
Also look for a B550 motherboard, although the "just released tax" might make it a bad deal. This should give you a tiny upgrade path (to zen3 and PCIe4), but seeing that your FX processor is far older than two generations it probably won't make a difference.
Well 12 cores are likely to generate a lot of heat (although with only 6 cores per chip that should help out a bit), so I doubt watercooling would be wasted.
But the games listed all seem like things a 980 should handle without any problems. Obviously a 1080p@60Hz monitor isn't going to show any difference between a 980 and most newer cards, and I strongly suspect your TV really can't sync faster than 60Hz. TVs often advertise "motion interpolation", but don't expect anything better than true 60Hz out of a TV (at least until the next generation of consoles comes out).
VR will certainly want a 90Hz (and often going higher) framerate, and with a fairly hefty resolution. But if you don't have the headset yet, I suspect that waiting a few more months would make more sense.
You don't have a build listed on your profile, so we really can't help you without a lot more information
On watercooling: this is more an Intel thing, especially as they keep cranking up the clock to try to compete with AMD. AMD users only choose AIO watercooling if the price of air cooling is too high (and the included cooler is often "good enough").
Don't forget that watercooling often makes a "glugging" sound as it starts (and sometimes ongoing), don't assume that it will always reduce noise.
2080 Super vs. a 980: tough call. I really don't expect a new nvidia card in 2 months, but they love to pull surprises (they are announcing something May 14, but expect it to be a compute card for servers; gaming cards will presumably come later and be based on that). AMD has been pushing their releases out to September, which might matter even if you don't want AMD drivers: if they can release a card faster than a 2080 Super, that will push prices down. Expect Covid19 to delay everything, but the 2080 is last year's card and next year's card is coming sooner or later.
A 2080 Super should be a huge step up, but what are you playing/running? Adding another 50 fps to Fortnite sounds great, but since it is already running at 100 fps you might not notice much. Can your monitor support the higher framerate/resolution of the 2080, or is it a matter of cranking up the sliders?
You should be able to put 3 more SATA drives (SSD or HDD) on that motherboard. Just make sure you don't plug them into SATA5 or SATA6 (no real danger, but they won't work and might disable your NVMe until you unplug it).
You will also have an x16 PCIe slot that should fit a 2- or 4-slot M.2 expansion card (I don't think I've seen a 4-slot card, and I don't expect anyone to buy something that uses 8 PCIe3 lanes to fill 4 PCIe4 lanes).
First, be aware that there are two wildly different models of "GT1030" at a nearly identical price: one has DDR4 and one has GDDR5. The GDDR5 is significantly faster. It looks like pcpartpicker separates them into "GT1030" and "GT1030 DDR4", but make sure it has GDDR5 before buying.
Second, while Gilroar is correct that there are a few considerations to be had before claiming that "Vega 8 == GT1030", if you are starting from scratch it is hard to justify paying ~$100 for an additional board vs. getting an AMD APU, motherboard, and DRAM (go with at least 16G as noted by Gilroar).
If you already have the CPU, motherboard, and RAM (and only want a GPU), then the main advantage of the GT1030 is the low power draw. AMD might be able to outperform the GT1030 at a similar price, but it will take 2-3 times the Watts (Nvidia also makes other boards at a similar price with a lot more performance but also use more power).
I'd just hook it up. As long as the bios (whatever it is called now) knows to boot the SSD, it shouldn't matter if you have a boot partition on the HDD.
This of course assumes you want the data that is on the HDD. You could format it if you wanted to, but I suspect any filesystem on a hard drive worth keeping is sufficiently modern to use.
Reading should be ok, but writing suddenly becomes slow. But reading any HDD is likely already pretty slow for gaming, depending on what needs to load between events.
If you are looking for 2T (and don't need more room... for HDDs I normally recommend going straight to 4T, but I haven't looked too hard at who uses SMR and who doesn't) I'd look at a 2TB firecuda. $30 more, but it has built in SSD caching (important for notebooks).
Personally I keep most of my Steam library on an old AMD/Enmotus (StoreMI) setup (2TB HDD/256G SSD), but AMD "stealth dropped" that in March.
Mostly a belief that since a power supply exists to supply Watts, the more Watts the better.
There's also the issue that overclocked hardware pulls more watts than the official ratings. Most power supplies are most efficient around 50% load, and the 80+ certifications only guarantee efficiency at 20%, 50%, and 100% load (nothing is promised below 20%). Ryzen "X" chips typically work as well or better without overclocking, and it looks like a 2080 Super shouldn't pull more than ~100 extra watts when overclocked (which is way too much anyway, considering the same review only saw a 7% gain from a 30% heat/power increase). That still keeps you within ~600W (80% of 750) even if everything else is pulling maximum power (it won't).
To be honest, I suspect that even heavy video editing will not be much higher than 150W (20% of 750), but you need all those watts for the worst case, not the typical. 750W looks like a good choice for this machine (12 cores and a GPU a hair below the top of the line is a lot of computer to power).
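The headroom math above is easy to sanity-check yourself. A rough sketch, where all the component wattages are my assumptions for illustration (not measurements of any particular build):

```python
# Rough sketch of PSU headroom math. All component draws are assumed
# numbers for illustration, not measurements.
PSU_RATED = 750  # W

# Worst-case component draws (assumed)
worst_case = {"cpu_12core": 150, "gpu": 300, "rest": 100}
# Typical editing/desktop load (assumed)
typical = {"cpu_12core": 60, "gpu": 40, "rest": 50}

def load_pct(draws, rated=PSU_RATED):
    """Return (total watts, percent of the PSU's rating)."""
    total = sum(draws.values())
    return total, 100 * total / rated

peak_w, peak_pct = load_pct(worst_case)
typ_w, typ_pct = load_pct(typical)
print(f"peak:    {peak_w}W = {peak_pct:.0f}% of rated")
print(f"typical: {typ_w}W = {typ_pct:.0f}% of rated")
# 80+ certifications only guarantee efficiency at 20/50/100% load,
# so ideally the typical draw lands inside that band.
```

The point is that you size for the worst case but spend most of your time near the typical number, so both percentages matter.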
If you are using Intel, I'd stick with whatever is built into the CPU. A ryzen more or less forces you to add a video card.
There is no reason to spend significant money on such a thing, as you probably aren't going to use the "3d" bits (KDE Plasma might use some; I turned off the special effects long ago), and just about anything should get the job done. I've used Linux since before 2000 and never really had any issue with graphics (unless attempting to game; that almost always ends with rebooting to Windows).
You didn't specify the resolution and framerate of your monitor (nor your CPU, and any built in graphics). That's always critical, as simply throwing in "the cheapest available" might not work (my old video card won't display 4k, although it was great at 1080P).
Some googling said that AMD drivers were a bit better supported than nvidia, this might have something to do with Linus's personal spat with nvidia a ways back.
cheapest "current" card. If you go with a gt1030, make sure you get one with GDDR5. Even if the performance probably doesn't matter, you get about 50% more of it with GDDR5 for roughly the same price. Linked card claims GDDR5.
https://pcpartpicker.com/product/GLTrxr/sapphire-radeon-rx-550-4gb-pulse-video-card-11268-01-20g Cheap roughly current AMD card.
https://pcpartpicker.com/product/LBhmP6/gigabyte-radeon-rx-570-4-gb-gaming-video-card-gv-rx570gaming-4gd-rev20 a "real" gpu, almost certainly more than you need but will probably be supported longer than the others here. I had the 560 here, then noticed the 570 was less than $5 more (and twice as powerful). WARNING: this thing will draw 150W (at full load, assuming you ever use it). Make sure your power supply is up to it. The 560 will draw at most 75W.
The 1650s is closer to the 580 8G, which is roughly the same (Overwatch is barely faster). But you don't have to worry about current AMD drivers with the 1650s.
Oddly enough, 580 4G is priced much higher than the 8G model according to pcpartpicker.com. Presumably nobody bothered to make them.
The only big difference is the ability to transfer large amounts of data quickly; it doesn't affect the time to start transferring (latency). Music really doesn't involve a lot of data (well under 1GB per hour even uncompressed, possibly more if you wanted floats or other higher-precision formats, but not much higher), so any difference will be over before you notice it.
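The audio data rates are easy to work out from sample rate, channel count, and sample size, which shows why disk bandwidth is irrelevant for music:

```python
# Back-of-envelope data rates for uncompressed audio, to show why
# disk bandwidth doesn't matter for music playback.
def audio_gb_per_hour(sample_rate, channels, bytes_per_sample):
    bytes_per_sec = sample_rate * channels * bytes_per_sample
    return bytes_per_sec * 3600 / 1e9

cd = audio_gb_per_hour(44100, 2, 2)     # 16-bit PCM stereo (CD quality)
hires = audio_gb_per_hour(96000, 2, 4)  # 32-bit float stereo at 96kHz
print(f"CD-quality PCM: {cd:.2f} GB/hour")
print(f"96kHz float:    {hires:.2f} GB/hour")
```

Even the float format works out to well under 1 MB/s sustained, which any drive made this century can deliver.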
I'd at least be a bit careful to make sure your PCIe3.0 card has enough [pseudo]SLC cache so it doesn't lag, but I doubt the bus will be an issue. Unfortunately the "old" rocket is nearly as expensive as the new one, and I'm not that familiar with the others (I know the Inland Pro initially shipped with the same controller, but don't know if they stayed with it as the price rose on the Rocket).
It is listed at ~400W. The CPU is throttled down to 65W, and presumably can hit ~100W when overclocked. I doubt the 2070 Super can do all that much more, but it will likely run close to 200W during any gaming. Having a power supply rated for twice the power you are using is likely ideal, although I might want to be closer to 550W if you spend more time websurfing and other low-power usage (you want your power supply rated no higher than 5 times your typical power consumption).
I'd look at this one:
(don't know anything about EVGA BR power supplies, other than the OP originally chose one).
Yes, these were mostly chosen on price. But the corsair at least is a solid brand that should supply that power for that price and EVGA is a well known company and the OP chose that particular line.
Some Microsoft software is particularly picky about loading on the C:\ drive. I used to have a 64GB Windows OS drive and simply couldn't install Visual Studio (regardless of how much of the stuff I tried to offload on D:\ drive). But a 1TB is more than enough.
I'm pretty sure you can move the "User" directory (My Documents, My Pictures, etc.), but there will always be other things that Microsoft insists on stuffing on your boot drive.
Just don't rely on any complicated simulations that fill all your RAM, or perhaps run them twice.
Also, it seems to be rather specific to a single line/pin/chip in your RAM. All errors appear to be in the second bit (the 2s) of the most significant byte (compare expected against actual). If you want to see whether it is just memtest86 breaking on Ryzen, swap your memory sticks and see if the errors move with them (presumably you would see an expected 00000000 and actual 00002000).
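You can pin down exactly which bit is flipping by XOR-ing the expected and actual values from the memtest report. A sketch using the pattern described above:

```python
# Sketch of reading a memtest86-style error report: XOR expected
# against actual to find exactly which bit flipped. Values match the
# pattern described above (expected 00000000, actual 00002000).
expected = 0x00000000
actual   = 0x00002000

diff = expected ^ actual      # set bits = flipped bits
bit = diff.bit_length() - 1   # position of the single flipped bit
print(f"error mask:  {diff:#010x}")
print(f"flipped bit: {bit}")  # the "2" digit in the high byte of the low word
```

If the same bit position follows a DIMM to a different slot, that points at the stick rather than the board or the software.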
No idea (and I've bought one I was pretty sure was already used). But I've never heard anything about them fixing their supplier problem.
Most of the problems seemed to center around Hitachi drives. But be careful about "fulfilled by Amazon" as who knows which bin that comes from.
There are also scare stories about packaging, but that might be just due to how many Amazon ships and the few failures will always slip through (and be the first to be complained about).
I'd recommend paying the 3 extra bucks or so and buying from Newegg; Amazon has a notorious history of filling hard drive orders from just about any supplier, many of which ship used drives. The Seagate is also an option, depending on how you feel about Seagate reliability.
Any hard drive is going to be simply "slow". Keep any data you want quickly on the SSD, and let the stuff that comes over slowly and in bulk live on the HDD. A 7200rpm drive shouldn't matter that much for transferring specific files, but it will always be about a third faster at finding the start of each file. In the end, I'd go with the cheaper HDD, and if you want more speed then buy a bigger SSD (although with a 4:1 ratio of HDD:SSD, this shouldn't be needed).
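That "finding the start" figure falls out of the rotational latency math: on average the drive waits half a revolution before the data passes under the head.

```python
# Average rotational latency is half a revolution, which is why spindle
# speed mostly affects "finding the start" rather than bulk transfers.
def avg_rotational_latency_ms(rpm):
    ms_per_rev = 60_000 / rpm
    return ms_per_rev / 2

for rpm in (5400, 7200):
    print(f"{rpm} rpm: {avg_rotational_latency_ms(rpm):.2f} ms average latency")

speedup = avg_rotational_latency_ms(5400) / avg_rotational_latency_ms(7200)
print(f"7200 rpm finds the start ~{(speedup - 1) * 100:.0f}% faster")
```

Seek time (moving the head between tracks) adds a few more milliseconds on top, and that part doesn't change with spindle speed.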
Contrawise, go with 1080p if you're a fps w-hore
On the other hand, don't count on getting much more than 60Hz at 1440p
I'd also go for a slightly larger monitor for 1440. I didn't have any issue seeing the pixels in 1080p with a 27" monitor without AA, but turn it on and you'd have to look for them.
Overall it sounds like a side-grade (trading fps for resolution). I wouldn't upgrade the monitor until you upgraded your GPU (which might take a few generations, although I suspect that AMD should at least bring competition to the x060 level at 7nm(+).)
If a red drive has trouble reading data, it will simply return that it can't read that data. A normal drive will try multiple times to recover the data, but reds are designed to work in teams where the data can be recovered by other drives and you don't want a single drive wasting time trying to recover from a failure. So you are risking your data to a degree.
That said, you can set reds to act "not like a red", but I'm not sure how easy it is.
Intel released the Pentium 4 Prescott (90nm) and Cedar Mill (65nm) in 2004 and 2006 respectively. While they were officially clocked no higher than 3.8GHz, I'm sure you could get them to 4.2GHz without much issue (some got past POST at 8GHz with LN2).
Don't expect better performance than a modern atom or similar bottom end performers. They had enough trouble competing with the 2004/2006 Athlon64s (even after spotting them 1-2 GHz).
"Performance is measured in MHz (now GHz)" was a Dave Barry joke in the 1990s. Readers weren't expected to know why this wasn't quite true, but they knew the rule was broken often enough to be funny.
Exactly which processors contain cores that won't run single-threaded work better once you disable the other cores on the same chip? Oddly enough, some Ryzen and Epyc parts might show the least improvement thanks to the way the internal bus works, but pretty much every processor will improve single-core performance when you disable cores. That proves nothing.
A digital signal is a digital signal, all the bits come out the same. The only real issue is that you are sending enough bits for the resolution and refresh rate of the monitor.
You'll probably want either DisplayPort or HDMI 2.0 (or better) to get 60Hz out of your monitor; I know mine defaults to 30Hz until I change the HDMI setting to 2.0. DP shouldn't have this issue, and I'd avoid DVI (I haven't followed DVI for years, but I'd be surprised if it handled 4k@60Hz or better).
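The reason 4k@60Hz needs the newer links is simple bandwidth arithmetic. A rough sketch (this ignores blanking intervals and encoding overhead, so real links need somewhat more than these raw numbers):

```python
# Rough uncompressed video bandwidth, showing why 4k@60Hz pushes past
# older HDMI. Ignores blanking intervals and encoding overhead, so the
# real requirement is a bit higher than these raw figures.
def gbps(width, height, hz, bits_per_pixel=24):
    return width * height * hz * bits_per_pixel / 1e9

print(f"1080p@60: {gbps(1920, 1080, 60):.1f} Gbit/s")
print(f"4k@30:    {gbps(3840, 2160, 30):.1f} Gbit/s")
print(f"4k@60:    {gbps(3840, 2160, 60):.1f} Gbit/s")
# 4k@60 lands around 12 Gbit/s raw, beyond HDMI 1.4's ~10 Gbit/s TMDS
# limit, which is why the port has to negotiate HDMI 2.0 (or use DP).
```

That is also why a TV stuck in an older HDMI mode silently falls back to 4k@30: the pixels simply don't fit down the pipe.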
Back in the VGA era, you might have cared about RAMDAC quality (and thus some boards were better. I think this was one of the advantages of 3dFX over nvidia). But nobody wants an analog signal anymore.
Oddly enough, the IBM power systems go the other way and allow 4 and even 8 threads per "core". The "cores" have similar extra resources to AMD "dual cores", but they prefer to call them single cores. Certainly the software houses that charge "per core" would have similar arguments that IBM "cores" aren't individual.
As far as I know, the original usage of the word "core" in electronic chip design referred to a reusable entity, typically the smallest you could cut it up. By this usage, the whole "4 core" CCX unit of a ryzen would qualify as a "core" (and the individual "cores" wouldn't). Trying to pin a legal meaning to a marketing term is pretty silly, and the engineering term often wasn't all that similar to the marketing term.
A hypothetical 3080 should use less power (presumably mostly thanks to a 7nm+ process) and probably a slightly lower price (especially once a 3080ti takes the "bragging tax" away from the 3080).
The only way you can expect significant savings is if AMD releases a card in that performance range. So far they haven't, but supposedly they are only halfway done in moving from Vega to RDNA(2/+/whatevers). Otherwise nvidia might as well pocket any savings thanks to 7nm+ (they certainly appeared to do so with Turing performance).
PSUs going kaboom? Mostly ones from Alibaba, especially for prices that can't possibly be 500W (remarking PSUs with arbitrary labels is an old trick, long predating Alibaba and even Ebay).
I'd stay with the big boys. It is fairly easy to produce a power supply, especially if you are cutting corners.
PS. TheShadowGuy has a lot of great information. One other tidbit of info is that ideally the "typical" power level (probably pretty close to idle, unless all you do is fps gaming) should be close to half of the rated power level. Of course, with a high power GPU this might be impossible (the thing can draw more power than the PSU is rated for), but it should hit peak efficiency. I'd at least try to keep idle draw over 20% of rated capacity.
Black is barely faster, but if you are replacing an external drive that means you really don't care about speed (presumably doing real work on a SSD and then storing the final result on the HDD).
Go with what's cheaper. Reliability should be similar, and they are essentially the same speed (way slower than SSD).
I really doubt that the HDMI connector on a TV can handle 120Hz. If it can, it would probably only do it at 1080p (and nobody bothers with a high-end HDMI connector on a 1080p TV). The catch is that nothing but computers produces 120Hz, and I doubt the next generation of consoles will bother trying. The "120Hz" TVs are probably just interpolating frames and blending them into a 60Hz output (this was a real problem for people looking at older "60Hz" TVs to use as monitors, but "real" 60Hz is now readily available). I wouldn't look to TVs for 120Hz (unless PC gamers sufficiently influence videophiles; but I suspect PC gamers are the new audio/videophiles, and they aren't that big a market).
A TV for 4k@60Hz is a great idea, and I've listed my experiences here: https://pcpartpicker.com/forums/topic/326815-using-a-tv-as-a-monitor-43-4k-60hz-200
PCPartpicker is using a 95W "TDP" rating although the 9900K will (briefly) demand 160W, so I'd stay away from 500W.
I'd probably want to spring for something like this
even though I'm the last to want to overspend on power supplies. On the other hand, the minimum draw (which is what you will likely be pulling while websurfing) is 104W. That is about 20% of a 500W supply's rating and should be reasonably efficient; don't count on a 750W supply being any good at that range, though it will be well into the 80% range during gaming. Don't skimp on your power supply.
Bragging rights for perhaps a month or two. Depends how quickly after March nvidia can unleash a 7nm 2080ti replacement.
Sic transit gloria
if you're the programmer, you get to decide.
That said, GPU programming is a different beast altogether. You probably need to know C and also enough assembler to know what C is "really doing". Also having a Nvidia card means you can code in CUDA, which is much more popular than straight OpenCL (which is what AMD is stuck with).
If you can break your code down far enough to use a few thousand threads (that don't need each other's data: GPU memory models are horrible), you can vastly speed up code by using the GPU. Most programming (other than video, machine learning, audio, and possibly some physics simulations) really doesn't work well on a GPU, but it is still up to the programmer.
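The "thousands of threads that don't need each other's data" shape is the key idea. A loose CPU-side sketch of it (real GPU code would be CUDA or OpenCL; this just illustrates the independence requirement with ordinary threads):

```python
# CPU-side sketch of GPU-friendly work: many independent per-element
# computations that never read each other's results. Real GPU code
# would be CUDA/OpenCL; this only illustrates the "no shared data" shape.
from concurrent.futures import ThreadPoolExecutor

def shade(i):
    # Stand-in for per-pixel/per-sample work: depends only on i,
    # touches no shared state.
    return (i * i) % 251

data = range(10_000)

with ThreadPoolExecutor() as pool:
    results = list(pool.map(shade, data))

# Because each element is independent, the parallel result matches
# the serial loop exactly.
assert results == [shade(i) for i in data]
print(results[:5])
```

The moment one element needs another element's output, you are back to synchronization and the GPU's weak memory model starts to hurt.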
The RAM looks like overkill, but I'd ask CS students for more up to date responses.
Yeah, right. I'd assume from your question that your next several programs will have the language (and thus whether CPU/GPU) chosen by the prof/teacher/TA. In the real world it also gets chosen by the customer/corporate policy/lead programmer (and the lead programmer had better not choose something CUDA- or LISP-based, no matter how well it fits, if he wants to find replacements for any employees who leave halfway through the project). But for independent work you may well look into CUDA programming.
You might want a higher resolution and/or higher frequency monitor. That GPU will likely be limited (unless doing raytracing) at 1080@144. And I really doubt your eyes will see anything higher than 144Hz anyway, so at least look at 1440 or a larger screen. That GPU is a beast (I was going to not recommend it at all, but it might make sense for deep learning. Then again, any modern GPU is probably good for learning the stuff, then rent an array on Amazon/whoever when you need to do real training).
Have you ever used Linux before? It doesn't take much space on its own, and my Mint install (not including /home) is 8.7GB (home is 34G, but contains years of data and plenty of duplicates). I'd carve a few partitions off one 1T SSD and change the other to an Intel NVMe (or even cheaper device) and store anything big and not needing high speed (videos and steam library) on the Intel SSD. If gaming on Linux (not really recommended if you dual boot), then expect it to eat disk space just like windows gaming.
Storage doesn't make any sense, unless you are buying that 256G SSD to fuse with the HDD via Enmotus/StoreMI. Even then, 1TB HDDs don't make much sense and I'd look for something like the 4T Barracuda Compute (although maybe not that exact listing; I'd avoid buying HDDs from Amazon).
Note that I do use the Enmotus system, except that I just happened to have a 256G SSD and a 2TB HDD left over from my old system and use the pair for a Steam library (so backup is irrelevant).
PS: sound card? Doing some rather specific audio work? They are rarely needed anymore.
What type of video card do you have? Also see if you can change your HDMI input from 1.0 to 2.0, that was something I needed to do to get my TV to show 4k@60Hz (the computer more or less automatically changed to fit the TV).
Another thing was that if you have "high dynamic range" enabled (I doubt your computer does), that might also interfere with the TV's ability to do 60Hz (I know mine can't do HDR 4k@60Hz, but haven't tried 1440p).
PS. Whatever you are using as a GPU (probably just the CPU's integrated graphics?) sounds rather out of date. I know my GTX560 couldn't do 4k (although AMD cards of the time could), and I'd assume anything since can.
(my experiences using a TV as a monitor).
While I'd certainly disagree with "every task on Windows", video editing should involve reads long enough for the interface to matter. I'd certainly spring for an NVMe for this application.
Note that this is almost entirely about "large transfers". You don't need the fanciest NVMe device, just one that has a PCIe interface and high transfer rates (being fast in other cases gets expensive). Also this summer NVMe was often cheaper than SATA (I needed to buy a SATA drive for my mom), although solid-state memory prices seem to have gone up since then.
If you wanted to separate things, you could still partition your drive. I suspect that any advantages you'd see would be ruined when the 256G partition filled up and you had to move bits over to the "big partition". I doubt that many 256G drives can keep up with a 1TB drive (especially a NVMe).
I'm using a 1TB drive as boot (and pretty much everything else), with an old 256G melded to a 2TB drive via the AMD/Enmotus (StoreMI) software. That pair is almost exclusively a Steam library, so I'm not concerned with data corruption (I'll just download things again). So it's a pretty useful thing if you have an AMD board (B450/X470/X570) and a 256G drive lying around (and an HDD, but those seem to be even more common).
I'd search further, but I suspect the answer is "cheap or reliable: pick one". And that "cheap" really won't be anywhere as cheap as a AIO.
Again, CPUs really don't need such cooling. AMD doesn't need it at all (perhaps excluding threadripper), and I doubt Intel needs to go much further than "a really good [possibly air] cooler".
GPUs tend to be another story, but much harder to find proper blocks and require additional cooling of VRMs and memory.
The "competition" is based on $20k Intel (Xeon) CPUs. However overpriced it may be, it won't be relatively overpriced. It is a relatively good deal if you need most of a server's power, but not all the RAS server-specific stuff. It isn't a good deal at all if a Ryzen 9 (or i9) will get your job done.
They also can make more noise, especially when starting up or dealing with low heat levels. They can gurgle until they purge enough air, which bugs some people.
I also have a Ryzen 7 2700, and have been trying to figure out how to get my old AIO cooler to fit (a Cooler Master Seidon 120V). The "non-X" 2700 isn't designed to boost as much as the 2700X [it isn't supposed to violate the 65W limit] and is probably best manually overclocked (which requires good cooling). Once overclocked, I don't expect it to limit itself to 65W.
VASTLY superior for gaming, but does it help with lightroom, photoshop and Sony Vegas Pro?
The 1650 super should have the latest nvidia compression routines, plus all the latest CUDA options. I suspect that any time any of these programs can use the GPU for acceleration there won't be any measurable difference between GPUs (it will simply happen instantly).
Being nervous about your data is never good. A good, inexpensive way to back up data is an external (USB, 3.5") HDD, preferably in the 3-8TB range; that way you can keep more than a single day's snapshot (or hoard data, if that's what you are into).
1.5TB of SSD can get expensive, depending on how full your systems are, but it sounds like most of the use is on the HDDs, not the SSDs. That's going to hurt.
"due to the amount of making/deleting/reading/writing of files I do." This can be read two ways:
"I spend a lot of time doing disk I/O": I need the speed of an SSD, and probably want a NVMe.
"I write over and over the same place a lot": this is certainly a threat to SSDs, although modern SSDs have an absolutely ridiculous amount of write endurance, making it next to impossible to wear them out (server databases can, but even the most dedicated "power user" shouldn't have a chance). I'd still look for an SSD with TLC over QLC (it should last longer) and something with a DRAM or SLC cache. Unfortunately I can't name good names (other than to avoid the cheap Intel NVMe card), although I think the favorite Sabrent Rocket uses TLC and SLC cache (probably pseudo-SLC caching, which might be good enough).
This is a bit out of my element, but it is hard to kill a SSD these days by repeated writes. Something that wasn't always true. But I don't think SSDs give any type of warning like your noisy drives.
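The "hard to kill" claim is easy to ballpark from a drive's endurance rating. A sketch, where the 600 TBW rating and the 50 GB/day workload are assumed round numbers for illustration (check your actual drive's spec sheet):

```python
# Back-of-envelope SSD endurance: even heavy desktop write loads take
# decades to burn through a typical TLC drive's rated endurance.
# The 600 TBW rating and 50 GB/day workload are assumed round numbers.
rated_tbw = 600        # terabytes written (a common 1TB TLC rating)
daily_writes_gb = 50   # a heavy desktop workload

years = rated_tbw * 1000 / daily_writes_gb / 365
print(f"~{years:.0f} years to exhaust {rated_tbw} TBW at {daily_writes_gb} GB/day")
```

SMART attributes do track total writes and reallocated blocks, so the wear is visible if you look, but there is no audible warning like a failing spinner.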
Samsung 2.5" 120GB SSD: if it is overflowing, you might look at a SATA 1TB drive. That is something you can easily keep when you replace the CPU/motherboard/RAM. If you decide to change both now, look for an NVMe card (there isn't that much difference, but it is faster and often cheaper). Keep it even if you "replace" it, although you might try using it in a less orthodox way: if you go AMD, you could use it as the "fast" half of an Enmotus/StoreMI combo (presumably with the WD 1TB drive, although don't do that if you want to keep backups on it).
WD 1TB SATA drive. Keep. Even if you only use it as backup (which is something it is ideally suited for).
Intel 4460 i5/MSI motherboard/8G DDR3. These all are replaced together or stay together (although you might consider adding some RAM if you are keeping it for awhile). While gaming has somewhat moved on, I really don't expect a whole lot of improvement from a new CPU+the rest.
NOTE: this has zero effect on getting graphics to high/ultra, so I'm guessing it is firmly in the "keep" category.
GTX960 GPU. This is probably still pretty strong. Granted, GPUs are one thing that they can still improve (even if the RTX gpus seem overpriced). You might want to look at some of the AMD (probably polaris, maybe navi) cards, and at least wait to see what is announced at CES. Note that 1080@60Hz isn't all that hard, so I'd look at an RX580/RX590 first (British prices seemed to have recovered their sanity from last I recommended one of those cards).
Case/fans: keep. Neither have had much to change in 5/6 years.
Completely depends on the game. For a driving/racing sim it is huge: it basically makes visibility out of the car a real thing and makes the cars far more controllable (you can "look" around the turns). I've heard similar things about Elite, but haven't really started it. Of course these are both "seated apps", which sidesteps all the problems of the "roomscale" systems being pushed harder, which require the physical room, hardware, and user to all work together to keep the user from falling over or crashing into things.
Other things are more hit or miss, and I'm only getting started on them. But I've been impressed for the most part. A favorite of mine is an exercise app, "Hot Squats". Couldn't be simpler: a row of bars (more like blocks) travels toward you and you squat under them. Simple, but effective.