I suspect that the primary argument for a 200G[E] is that you weren't going to populate the second channel anyway. A 2200G is worth much more once you take into account the motherboard, ram, monitor, windows license (especially if you need a windows license). The $40 difference gets lost in the noise.
If you cheap out as much as possible, with single channel ram, Mint instead of Windows, a reused monitor (or TV), reused keyboard & mouse, and a 120G SSD (please don't use SDHC), then the advantages of the 2200G begin to disappear and the $40 difference becomes significant.
If the OS/Programs drive isn't used often and the smaller drive isn't used for storage, you would still be better off combining them into a roughly 512GB (480?) drive. It will simply access any of the data faster, with a possible exception when it is accessing both the cache data and the OS/Programs data at the same time (and even then it will still likely be faster than the 128GB drive). Buying one drive should be cheaper, faster, and easier to manage than two separate drives. You can get the best of both worlds (performance of bigger drives, management of separate drives) by simply partitioning the drive.
Combining SSD drives past 512GB probably doesn't buy you much (performance doesn't increase), and even less if you often access both at the same time. I can also imagine filling up a case with a HDD array (I'd be equally tempted to stuff them in an 8GB DDR3 craigslist special and run FreeNAS on it, but that would be hard to cache). Past that I'd still expect that most of the "lots of SSD" is thanks to buying an SSD every time the price dropped or you filled one up.
So two SSDs (OS/Programs + Cache) (1TB Mass storage). Three HDDs (4TB bulk storage [cached]) (internal backup HDD) (external backup HDD). Sounds like "4 or 5" to me (depending on if you count the external one or not).
The OS/Programs might get a bit cramped if you start using this computer much outside of video editing, but there certainly will be plenty of room to stuff the overflow.
Three SSDs doesn't make any sense (unless you are just reusing old ones, which I suspect is the case). Also, do you already own the 1TB HDDs? The price difference between 1TB and 3TB isn't all that much: is the next 2TB of data really worth less than $20 (per drive)?
RAID 1 doesn't increase space. It just means you can keep going while you buy another drive. And remember to back things up to those backup drives (and seriously consider bumping them up to 3TB if these aren't ones you already have). RAID 0 increases space (and possibly speed of bulk transfers) but is half as safe as regular drives (plus additional ways for your OS to corrupt it). RAID 5 is the best of both worlds (redundancy plus increased storage), but still doesn't work with StoreMI (make sure your motherboard "supports" RAID 5 before expecting to use it in Windows, although higher-end editions of Windows have better built-in storage options). I'm fairly sure that getting a tiering system working is going to be more important than getting a RAID system working: you'll either need a tiering system compatible with your RAID or you'll be manually sloshing data from HDD+SSD to RAID.
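To make the capacity trade-offs concrete, here is a minimal sketch (my own illustration, not tied to any particular controller) of usable space per RAID level:

```python
# Usable capacity per RAID level for n identical drives of a given size.
# (Illustrative only; real arrays also differ in speed and failure modes.)
def usable_tb(level, n, size_tb):
    if level == 0:            # RAID 0: striping, all the space, no redundancy
        return n * size_tb
    if level == 1:            # RAID 1: mirroring, one drive's worth of space
        return size_tb
    if level == 5:            # RAID 5: striping + parity, lose one drive's worth
        return (n - 1) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

usable_tb(5, 3, 4)  # three 4TB drives in RAID 5 -> 8TB usable, survives one failure
```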
StoreMI works with one fast and one big drive, so fancy drive arrays won't work (I haven't really dug into other tiering options, but you might want to check them. Also, backups become even more critical as you add more things that can go wrong and corrupt your data). That said, I really expect you want a tiering system for working on video: lightning-fast loads/saves of whatever you are working on, without having to manually slosh data from SSD to HDD.
If you really want to go down the rabbit hole: https://www.reddit.com/r/DataHoarder/ will take it to any extreme.
Keeping the whole price under $400? I'd try to swing an AMD 2200G. Can't beat its value if you want CPU+GPU power (but expect to pay more for memory).
Paying $400 for a CPU? Not such a good idea. For gaming, I'd put the excess in a GPU; for anything else, in the monitor. Some great chips (it's hard to go wrong with any):
i5 8600k: Probably the per-thread champ and the best for gaming and nearly all other tasks. Almost as strong for the occasional thread-heavy tasks and comes at a great price. ($240)
AMD R7 2700x: If you can fill the threads, this beast will be the fastest (short of something like a threadripper). Does its thing without need for overclocking (if you care). Pretty good at single thread (expect that even a 2080ti will limit your framerate in games): $320
i7 8700k: a bit of a compromise between the above: all the single thread power of the i5 8600k, with maybe 10-20% more power if you can fill 12 threads (but still won't catch the 2700x): $320
6 core AMD: good prices, but if you are willing to spend $400, I'd just go straight to the i5 instead. Saving $200 vs. $160 isn't much of a value if you care about single thread performance. On the other hand, you could buy one of these with the idea of upgrading to a Zen 2 in 2019-2020 (don't ever assume you can re-use an Intel motherboard). If you really want to spend $400 wisely, I'd go so far as to say: buy a 2200G today and a $300 Zen 2 after prices stabilize in late 2019, and keep it 5+ years.
Via bought Cyrix. They also bought Centaur (the WinChip people). I think they sold a few WinChip follow-up CPUs branded as Cyrix, but Via closed Cyrix down and only kept the name (they probably kept producing some Cyrix chips: the MediaGX [sold to National Semiconductor?] seemed to live forever).
Intel killed Cyrix by suing them to the point where they were paying lawyers $9 for every $1 in engineer salaries (which was probably similar to Intel's own ratio at the time, since they sued everybody, but Intel could still afford to design chips and maintain process superiority).
I've had 3 Cyrix-based machines,
3 Intel based machines,
4 AMD based machines (one of the AMD machines hardly counts: it was cobbled together from an Intel machine, an unused CPU, and about $50 in cheap geeks.com parts, and was replaced in about a year).
And I guess two MOS Technology 6502-based machines that my parents bought (can't forget that Atari 400 and the later 800).
[sorry about the spacing, that's the only way I know how to include linebreaks in pcpartpicker.com]
I really loved my Cyrix machines, they were great for the price (even if I played all that Quake on a 6x86).
I prefer AMD, mostly on price and typically get more than I pay for (my second machine was a "32 bit" Duron that mostly ran Linux in 64 bit mode).
I'm inevitably irked with Intel chips, as they always seem to remove features included on the chip (AMD has only recently done that with the <8 core Zens). That said, one of my Intel chips was a 300A Celeron (in a computer later nicknamed "Methuselah" thanks to how long I kept it and how long it stayed useful; of course, this also involved a 533A overclocked to 800MHz after the original chip stopped being happy at 450MHz).
So I'll admit to being biased toward AMD, mostly because you typically get what you pay for and more with them, while Intel has to chop features off a perfectly working chip before they will part with it at your price (the only thing holding back my 300A was the "official" clockspeed; that was a really dirty trick to snuff out the K6, but I bought it anyway). But I have to admit that from Sandy Bridge to Zen, Intel was almost always the better choice.
As far as "AMD bought them", it seemed that at the time ATI was buying all the companies with cool vaporware*. I don't think anything ever came of any of them, but AMD owns them all now (although Cyrix [bought by Via] and 3dFX [bought by nVidia] were the ones I mainly cared about).
First, the next big leap in performance will be either late 2019 early 2020 with cards from both nvidia and AMD made to TSMC 7nm. This will be a true "generation after the 10x0 series" and the last such generation for at least 5 years (although expect "generations" like the 780 to 980 (same 28nm process) or the 1080 to 2080 (16nm to 12nm)).
Can you really turn V-sync on @60Hz (or more) at 4K with a 1080ti? My guess is that will be AMD's goal with Navi (especially for the PS5, and Sony is paying for Navi development). This would also be a great goal for a 2160.
Raytracing has a long way to go, especially in the mainstream (again, a lot will depend on the 2160, and expect it sometime late 2019, early 2020). I'd expect this to bring amazing graphics to Unity level budgets, but we shall see if it takes off or not.
I still want VR. While human visual resolution is something like 32,000x32,000, you don't have a gigapixel in your eyes (nor can your brain process that many pixels). You have more like 6M pixels, or somewhere between HD and 4K, with almost all of it within the first 10 degrees or so of exactly what you are looking at, and the rest falling off fast. With strong enough foveated rendering, you can completely max out human vision with VR (or possibly just one renderer per person watching a large screen, but that would be weird). Since Moore's law is pretty much at the end of the road for GPUs, I'd strongly suggest giving up "brute force" rendering and only rendering what you know a human eye is looking at.
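The back-of-envelope numbers above work out like this (using the paragraph's own rough estimates):

```python
# Rough foveated-rendering arithmetic; both figures are coarse estimates.
full_res = 32_000 * 32_000           # brute-force "eye limit" frame: ~1.0 gigapixel
effective_eye = 6_000_000            # pixels the eye/brain actually resolves
savings = full_res / effective_eye   # potential work reduction from foveation
print(f"{full_res / 1e9:.2f} gigapixels vs {effective_eye / 1e6:.0f}M -> ~{savings:.0f}x less work")
```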
Anyone seen a good place to download DBAN? The original location is dead, and I try to avoid SourceForge (known for adding malware to installation programs).
Until recently, lack of DRAM on a SSD was a terribly bad sign that you were buying the cheapest of the cheap SSD. I'd certainly recommend something more like a MX500 (or any crucial SATA) as a "bottom price".
But since then I've seen reasonably good reviews of things like ADATA NVMe cards, which, while certainly not competing with the top-line NVMe cards, competed on price with the Crucial SATA lines and seemed to equal or beat any SATA performance spec. I'd want to dig into as many benchmarks as I could for anything without DRAM, but don't be too surprised if a reputable company has made a "pretty good for the price" drive out of one.
Just don't expect top performance without DRAM (or similar buffer memory).
Depends on the user. I've known plenty of people that never use more than ~100GB of their hard drive space, but they tend not to show up on places like pcpartpicker.com.
Checking out HDD prices (more or less the cheapest hard drives; since there are only 2-3 real manufacturers left, brand doesn't matter much):
1TB = $43
2TB = $54
3TB = $65
4TB = $95
I recommend going straight to the 3TB HDDs: three times the storage for a 50% higher price. 120G is really small (256GB is mostly where SSDs start nowadays), but the price is certainly right. I can't say I've had hard drive corruption due to using too much capacity (possibly not true: I've killed at least one filesystem and may have overfilled it), but it isn't a good thing to do (your write speed will always suffer from overfilling a drive, and HDDs really can't afford to lose any more performance).
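A quick cost-per-TB check on the quoted prices shows why the 3TB drive is the sweet spot:

```python
# $/TB for the hard drive prices quoted above
prices = {1: 43, 2: 54, 3: 65, 4: 95}            # size in TB -> price in $
per_tb = {size: cost / size for size, cost in prices.items()}
best = min(per_tb, key=per_tb.get)               # cheapest per TB
print(best, round(per_tb[best], 2))              # the 3TB drive, ~$21.67/TB
```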
I did a detailed analysis on the size of the silicon vs. the cost of the board here: https://pcpartpicker.com/forums/topic/293649-depressed-about-rtx-me-too
Compared to various "assumed costs", the new nvidia boards aren't that bad, but still appear to be a "side grade" into raytracing instead of a pure upgrade into rasterizing onto 4K monitors. Nvidia is certainly selling a lot of features, but without software to run them they are worthless to me.
I'd strongly recommend waiting a generation for nvidia to use improved silicon (7nm) and actually have software available that might even use the RTX features.
I was wondering about this, so decided to try to make some reason of nvidia's prices and costs.
Far left is the name of the card. Next is how many square mm of silicon it takes (this is basically nvidia's cost; it doesn't scale linearly, but should be close enough for GPUs). Next is some attempt to scale the price of the card to the price of the mm**2 of silicon: I lop $100 off the price of the board (to cover everything else) and then divide that by the size of the silicon. Since the 1070 cards seemed too low, I looked up how much of the silicon was locked (compared to the 1080) and controlled for that (the percentage in the locked% column and the price in the last column).
Note that prices for Pascal are mostly eyeballing the cheapest "recommended, i.e. near the top" cards while Turing cards include the "founders tax" (I don't expect you will be buying them without it any time soon).
board    size   cost  cost/mm  locked%   cost/mm(unlocked)  size scaling  cost/mm w/ scaling
1070     314     360  $0.83     85.11%   $0.97              1.00          $0.97
1070ti   314     400  $0.96     95.00%   $1.01              1.00          $1.01
1080     314     440  $1.08    100.00%   $1.08              1.00          $1.08
1080ti   471     650  $1.17    100.00%   $1.17              1.00          $1.17
2070     445     600  $1.12    100.00%   $1.12              0.75          $0.84
2080     545     800  $1.28    100.00%   $1.28              0.75          $0.96
2080ti   754    1200  $1.46    100.00%   $1.46              0.75          $1.09
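The column math is simple enough to reproduce. This sketch uses the table's numbers, with my assumed $100 board overhead and 0.75 linear-shrink factor:

```python
# Cost-per-mm^2 estimate for each board, as described above.
BOARD_OVERHEAD = 100  # assumed $ for PCB, cooler, VRAM, margin, etc.

def cost_per_mm2(area_mm2, price, unlocked=1.0, scaling=1.0):
    """Silicon $/mm^2, optionally corrected for disabled die area
    and for the (assumed) 16nm->12nm linear scaling factor."""
    return (price - BOARD_OVERHEAD) / area_mm2 / unlocked * scaling

cards = [  # name, die mm^2, price, enabled fraction, scaling
    ("1070",   314,  360, 0.8511, 1.00),
    ("1070ti", 314,  400, 0.9500, 1.00),
    ("1080",   314,  440, 1.0000, 1.00),
    ("1080ti", 471,  650, 1.0000, 1.00),
    ("2070",   445,  600, 1.0000, 0.75),
    ("2080",   545,  800, 1.0000, 0.75),
    ("2080ti", 754, 1200, 1.0000, 0.75),
]
for name, area, price, unlocked, scaling in cards:
    print(f"{name:8} ${cost_per_mm2(area, price):.2f}/mm "
          f"${cost_per_mm2(area, price, unlocked, scaling):.2f}/mm adjusted")
```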
Results: prices pretty much follow costs, with cost/mm going up as size grows plus some "extra price" for really big chips (which is expected). I doubt nvidia makes more profit on one 2080ti than on two 1080tis.
The really weird thing is that for all the expected cost of the "raytracing hardware", the 471mm2 of the 1080ti sits between the 445mm2 of the 2070 and the 545mm2 of the 2080. Between architectural improvements, the 16nm-to-12nm process improvement, and the switch to GDDR6, they manage to keep abreast of the old guard. Of course, all the benchmarks we've seen must be assumed to have been carefully chosen by nvidia. HardOCP (and I presume others) should be preparing benchmarks from boards they have directly purchased without signing any nvidia NDAs. After coming up with this I added my final "with scaling" column to reflect this. Note that I used a single dimension, because silicon scaling doesn't give you 4 times the transistors for half the pitch; and even if single-dimension scaling isn't exact, it works embarrassingly well for back-of-envelope math like this.
The real question is if you want what nVidia is selling. It looks like they have added Raytracing to the price of the old Pascal hardware. If you have a Pascal board (10x0), this won't be an upgrade (unless you go straight to the top) until Raytracing (or other RTX tech) games come out. I'd call it a pass, and wait for 21x0, preferably with AMD competition, but both might wait for 2020.
Edit: let me gripe about being forced to use "C notation squares" instead of my old-school "Fortran notation squares" (luckily for me, Python uses the old-school FORTRAN notation).
AVX-512 has a nasty tendency to reduce clockrates on Intel chips. Plenty of obvious uses (mostly for copying data) are skipped simply to avoid having the clockrate nearly fall in half. If I were AMD, I'd tread lightly here and probably implement each AVX-512 instruction in at least two passes: some of the instructions look extremely useful, but might take more than two cycles (512-bit integer multiply?). I'd also look at the assumption that both threads are trying to maximize AVX use and build accordingly: allow more AVX256 (or AVX512 halves) to be dispatched at once and/or allow slightly worse latency on AVX instructions.
From AMD's perspective, anything running on AVX512 probably should be run on a Vega/Navi core anyway. Zen cores are small, and spamming more Zen cores beats going crazy making AVX too wide (and thus making Zen cores too big).
A 20-30% IPC gain will never happen (per thread). At best we can hope that AMD found a few things in Bulldozer that actually worked (not many, but there had to be some) and might know how to get more work out of the second thread (perhaps letting it have its own resources, including its own L1 cache).
Clockspeeds approaching 5GHz are much more likely, as they will finally have a process similar to Intel's.
There are a ton of ways to improve memory access (unfortunately Intel appears to control Micron's 3D XPoint memory options), but expensive brute-force access really isn't going to help Zen as much.
Might be thermals, but if it is only happening in a beta test there is always the chance that they are simply software bugs. I've never seen anything like that thanks to software, though (any pointer bad enough to write to video ram is bad enough to crash the whole program).
Except it is a $99 24" monitor that is unlikely to go much above 1920@60Hz, so I wouldn't be too worried about the monitor's "value" (don't buy a ~$400-$500 GPU to "protect your investment" in a $99 monitor). Right now, the AMD RX 580 is roughly the best value in that range.
Personally, for general use and casual gaming I'd budget the monitor first (since you'll spend all your time on the computer looking at it), and if centered around gaming, then the monitor + GPU together. The 24" monitor sounds hard to beat, and a lot depends on just how much space is available on your desk (I'd love a 30"-40" 4K TV, but would have to re-arrange my desk). Also don't expect to be able to see a much higher resolution than 1920x1080 on a 24" monitor (you should be able to tell if anti-aliasing is on or not, but expect diminishing returns).
I'd also recommend checking ebay/craigslist/site_of_your_choice for used 1070 and 1080 prices in a week or so (when 2080s start shipping). You probably won't be able to tell the difference between them and an RX580 on an inexpensive monitor (a 1080 should simply always have a frame ready at any resolution/refresh the monitor can show), and that should remain true on later titles. I can't tell you what those prices will be, but I don't expect new low-midrange pricing to change for the next year (until the 2150-2160 + AMD cards are released, which could be as late as 2020). Don't blame me if you find a great deal on such a card and then blow the budget buying a monitor that could try to keep up with the GPU.
Xeons (and x86 CPUs in general) simply aren't as dependent on the BIOS/drivers for day-to-day operation. They do have some way to alter the microcode, but access to it is one of Intel's (and AMD's) most closely guarded secrets, unlikely to be available to the public (and it wouldn't change that much anyway).
I'd be surprised if flashing a geforce to a quadro really improved gaming performance (unless it unlocked formerly locked cores). Mostly I'd expect it to enable quadro drivers and ideally enable better double precision floating point (useless for games).
I don't think AMD has included unlockable parts since Bulldozer. I know you can't unlock recent CPUs (you pretty much always could until then, but none of those chips were competitive without winning the silicon lottery anyway). I can't see them letting you unlock Vega cores, either (I'm almost positive you can't unlock a Vega 56 to a Vega 64).
Movies don't need better than 30P, but sports (and other live broadcasts) tend to want at least 60Hz. If there isn't a director/cinematographer who can frame the action to prevent obvious framerate artifacts, you are going to need 60Hz.
I've also read a lot of reviews complaining about TVs running at 30P and mouse issues (too much mouse lag), mostly from TVs incapable of 4K@60Hz thanks to HDMI limitations. I'd recommend getting a monitor capable of 60Hz just for general use, even if the GPU can only just manage 60Hz for normal desktop work (and would faceplant if asked to run Crysis at full resolution). An HTPC could go either way; I'd expect you can deal with a bit of lag for simple menus, but how hard can HD (not 4K) @60Hz be, anyway? Just run your HTPC menus at something the thing can handle at 60Hz...
Going higher than that is for gaming only, young eyes only, and probably getting into the "gamers are the new audiophiles" territory. I'd recommend doing a blind test at your nearest Microcenter or similar before paying for enough GPU/monitor to run at high refresh.
I'd have to agree, although I'd include an option for the 570/580 in there as well.
I wouldn't count on Navi to make it out in 2019, AMD has been rather quiet on that area.
Valve even claims a Steam port! No Windows license required (although expect a performance beating in Steam-on-Linux, even at 720P).
Beyond dealing with halfhearted Linux drivers, you also have the issue that the above BOMs have a single stick of RAM: I'm sure the benchmarked systems had two (though I don't expect anyone to buy two sticks for an Athlon 200GE). But that issue is the same whether you buy AMD or Intel.
AMD doesn't seem remotely interested in doing so. The motherboard is just as expensive as a 2200G's. Memory is a big issue: you might get away with a single channel, but expect it to be as slow as or slower than the Intel. The Intel has superior graphics thanks to just how many Vega cores are turned off: you'd need at least twice as many cores enabled to match Intel.
There's also the cost to AMD. These Athlons are almost certainly full-scale Raven Ridge cores. We don't know how busy GF is or whether it makes any sense for AMD to bother changing the masks to cut down Raven Ridge for these cheap cores (masks are expensive). Zeppelin (which still makes up Epyc processors, which should be selling well thanks to Intel's product shortages) costs as much to manufacture. Pinnacle Ridge (2xxx Ryzen) might cost a touch more (same size, slightly better process) but roughly the same.
The really irritating thing about the chip is that so many Vega cores are turned off that I expect Intel to have a significantly better IGP. That doesn't make sense when buying a cheap AMD chip. My guess is that they are steering as many customers as they can toward that $99 2200G.
This isn't the "old AMD" that had to bottomfeed in the market and let you try your luck unlocking cores that may have been locked thanks to market segmentation or may have simply failed. This is an AMD where every "cheap chip" comes at the cost of producing a server chip that might well sell in place of an Intel chip*. AMD might lock those Vega cores to keep demand low.
"tiering" is when the computer handles moving things from the HDD to the SDD and back for you. It works assuming you play mostly a small subset of games at one time, or mostly use your HDD for media. If you are just as likely to play one game as the next, it won't work at all (although I can't imagine the learning curve needed when "randomly" choosing one of 2TB of games).
I really don't think developers tune games to work on HDDs anymore; I'd assume they expect you to have the game on an SSD. But either moving it yourself or having the computer move it for you (tiering) when you fire it up can be much cheaper when buying storage by the TB.
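For intuition, here is a toy sketch of what a tiering layer does conceptually (everything here is made up for illustration; real systems like StoreMI work at the block level, not per file, and use smarter heuristics):

```python
from collections import Counter

# Toy file-level tiering: the most frequently accessed files get
# promoted to the fast tier, everything else stays on the slow tier.
class TieringSim:
    def __init__(self, ssd_slots=1):
        self.ssd_slots = ssd_slots  # how many files fit on the fast tier
        self.hits = Counter()       # access counts per file
        self.ssd = set()            # files currently on the fast tier

    def access(self, name):
        self.hits[name] += 1
        # keep the hottest files on the SSD, evicting colder ones
        self.ssd = {f for f, _ in self.hits.most_common(self.ssd_slots)}
        return "SSD" if name in self.ssd else "HDD"

sim = TieringSim(ssd_slots=1)
for _ in range(3):
    sim.access("game_A")  # played repeatedly -> migrates to the fast tier
sim.access("game_B")      # played once -> served from the slow tier
```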
I'd be surprised if there is a Titan for this chip (12nm Turing). There are already 3 different Quadros using it, plus the 2080 ti. I'd be fairly surprised if they managed to find any more ways to segment the market than 4 boards with one chip.
Granted different engineering (and primarily marketing) build the boards, but I'd expect most of nVidia's energy is going into making a next generation 7nm board (possibly more like a "tock" architecture, but I suspect that enough work is required to go from 12nm to 7nm that it has to be much more than a "tock"). If nvidia wants to produce a Titan, I'd expect it from this follow-on architecture.
The biggest problem here is that nvidia certainly will try to prevent any leaks (they certainly seemed to keep the words "ray tracing" from bouncing around too loudly) about their 7nm strategy, at least until AMD gets ready to release Navi (7nm Vega shouldn't threaten nvidia at all). At that point they will have to at least update the x050 and x060 boards (competing with the 2x00G series depends more on Intel than on the GT1030 boards), although I'm not counting on AMD to threaten the 2080 and above (AMD needs to hit 4K at better than 60Hz, and isn't going to do any hardware raytracing). We might see something interesting for VR (Sony is paying the bills, and Sony may have shipped more VR headsets than anyone else), but don't count on head-to-head competition with Turing.
Nvidia may want to update Volta first: Turing's hardware raytracing really doesn't have any competition. The machine learning (FP16) bits of it certainly do (from Google, AMD, and others). nVidia may feel the need to update Volta to 7nm first (they may call it something else, but a huge chip primarily used for ML and HPC would be a "Volta successor"). Such a chip might get the Titan treatment (but probably only if it included enough hardware raytracing to fit nvidia's "high end boards raytrace" marketing mantra).
If you have to ask (and you've filled up drives before), then no.
For everyone else (and if you are building it for someone else), then possibly less (I can't believe how empty some "non computer people" leave their hard drives). I'd assume that if you are buying drives by "the terabyte", you still want a HDD. On the gripping hand if you are just "piling up the numbers" to match your budget, I'd have to ask if you ever filled up a TB before. Past 10-20% headroom, there is little benefit for empty space in a hard drive (at least in the TB class).
But $90/TB? I'm guessing you are looking at a ~$150 jump for the second TB vs. a ~$60 hard drive. Your call. But remember that $60 drive will store 3TB, and flash storage appears to be one of the few things in computers that is still continually getting cheaper: you could get the 1TB "SSD only" computer now and upgrade in a year or so. If you are building an AMD system, I'd recommend going straight to a B450 motherboard and its StoreMI tiering system with a (possibly NVMe) drive (512GB would do) and a 3-4TB HDD. There are other tiering systems available that might work fine with an Intel system (you can buy the software supplied to AMD from the original company, but I doubt the extra cost is worth it; expect to pay more than half the $90 difference just for the software).
Windows has some nasty habits of absolutely having to paw all over your drives, requiring drives to spin up for no reason and (presumably) inspecting each file (perhaps for icons? More likely to send data back to Redmond). [I've noted that sorting huge directories by size takes forever on current Windows and was virtually instant in XP; my guess is that it isn't happy with just looking at the directory data and has to paw through each file.] If you want to just let Windows do its thing, getting rid of HDDs might be helpful (although most of these issues should go away with tiering).
Never a "great value" (unless you were buying it for fp64 (supercomputing) or fp16 (machine learning) performance, then it was ideal). Just "less terrible" than everything else.
The only benefit an AMD FX8 would have is that it could take DDR3. Comparing CPU+motherboard, the Pentium would be the same price and 2-3x more powerful (for everything the parents are likely to do). If you wanted to cheap out on something like 4G of DDR3, it might work. But the Intel system makes more sense overall.
The AMD FX8 will basically give the performance of the Chromebook I pushed earlier. The catch is the Chromebook costs ~$200 (with a 1.6GHz Intel chip and 4GB) and all the "little things" (especially if you can't transfer the Windows license) quickly add up to a lot more than that. But the whole point of the Chromebook is that you don't have to administer Windows (i.e. remove all those toolbars in your favorite browser to finally see the screen) every time you show up at your parents' place. For "basic internet", ease of maintenance is more important than performance (even the cheap ARMs in a tablet tend to be mostly waiting on the internet, and the Chromebook has a "real" out-of-order Intel CPU inside: an Atom-class chip, not derived from Sandy Bridge, but still enough for that type of work).
That changes things slightly, in that a GDDR5 1030 might run $110CAD, a RX550(512) might run $125CAD and a RX550(640) $135.
I'm less convinced that there is a clear advantage to the (512) AMD, so I'd happily go with the 1030 (but make sure it has GDDR5). But this is in no way like UK pricing where the AMD cards are completely priced out of the market.
It might be an upgrade but don't buy the thing based on the GDDR5 scores unless you are sure the thing has GDDR5.
Pcpartpicker has separate choices for the GT1030 (~$100 to ~$85) and GT1030 DDR4 (~$90-$70). But it looks like the AMD RX550(640) might be the better all-around deal (I suspect it eats more power, but that shouldn't be a problem unless Dell was being too Dellish; check for available power connectors on the power supply).
Be careful with the GT1030: you will likely see benchmark comparisons of the GDDR5-based boards while the boards for sale are DDR4 (which are half as fast).
I'd second the used (especially AMD) market, but try to get one that wasn't a miner favorite. 1680x1050 resolution makes things considerably easier, but the real entry level cards will probably require tuning the graphics down on the latest games.
Nvidia has been dumping all their Pascal 10xx chips on their board partners recently, and they need to get them out the door before next Thursday (when the 2070 and up drop). I doubt there is anything wrong with the boards other than being a "generation behind" in a week.
I'd assume the results would be the same as for StoreMI plus a drive with only a DRAM cache (measured in MB). One difference is that as a tiering system, StoreMI will want to move the data to the SSD, so any data previously accessed on the HDD will now be on the SSD (completely wasting the cache's main strength). You might still get a lot of hits from the HDD "reading ahead", but you would get most of them with a tiny DRAM buffer (typically) unless there were just too many files open to buffer.
With 80GB it is doubtful they are installing applications willy-nilly. I'd look at a chromebook first, unless there is some particular app that is necessary. Otherwise they will have to learn to maintain win10 coming from XP or Vista.
If you do go with the PC, "performance" is likely irrelevant (they don't have any now. Does it even have RAM?). On the AMD side I'd look at the 2200G, but I suspect you'd really want Intel (have they ever used whatever GPU was on the E5700?). I'd expect the service life of the SSD to be slightly longer than the HDD's, as those 1TB HDDs are simply cut down everywhere they can be (I normally suggest bumping straight up to 3TB, but if 80GB was never filled, they aren't going to fill up 512GB, let alone 3TB). I wouldn't worry about a 256G SSD, but doubt there is any reason to go below that now.
I also think that the required 8GB of RAM will cost almost as much as the Chromebook (not to mention the way "the little" parts add up), but the real beauty of the Chromebook is (hopefully) zero need of administration.
PS: my mother had this long drawn out sob story about how she had so much trouble finding a win10 notebook and getting it to work (her second notebook wouldn't connect to the internet until I disabled edge to use explorer to download chrome. Touch anything connected to the internet and Edge would pop up and immediately crash). A few months after finally getting this all straight she mentions that what she thought she was buying was what my aunt had: a Chromebook.
I think you have to choose between Intel and Raven Ridge for any sort of conventional GPU-less system. You can look for AMD ITX boards with built-in graphics, but expect anything claiming such to mean "with a Raven Ridge or other APU". Old-school Bulldozer motherboards did have built-in graphics, and there might be a Ryzen board out there with the same, but don't count on it with an ITX board where every square inch counts.
For less conventional systems you might try to pull a "remote desktop" via an internal second lightweight computer (graphics performance will be worse, but expect to lose all hope of graphics performance without a GPU). Raspberry Pis are an obvious choice, but I wouldn't count on them to run conventional "remote desktop" systems (look there first, though). Windows claims a "Windows port" to Raspberry Pi computers; see if it can run a remote desktop.
[second choice. Presumably runs all the software you need]
To be honest, I'd seriously consider Raspberry Pi's stronger competitors in building a "computer I can stick in my pocket". Don't expect anything that can compete with an i3 or 2200G in performance if you go that route (unless you are going for 64 threads or similarly "embarrassingly parallel" routes). But at least it will fit in an only slightly unreasonable pocket.
For less conventional means, consider "external laptop GPUs". These are a royal pain, but might get things done.
To an embarrassing degree, all a motherboard does is tie together the processor, the RAM, the video card, and the means to plug in various stuff from the outside. The "quality" comes down to features and reliability. If you are pushing overclocking, expect to require deeper reserves in power supplies and signal quality (which is generally along the same lines as the reliability budget).
Choose the CPU first (and AMD has an additional advantage in that it should allow you a chance to upgrade the CPU again). If you are going on a five year plan, there should be a few more AM4-compatible CPUs being designed; just grab one when they go on clearance in 2021 or whatever.
The only other thing to watch for in motherboards is the number of RAM slots. RAM seems to be on a high cycle right now and you want to be able to add more when it goes low without removing/selling the RAM you already have. 4 RAM slots should help.
If you look at the overall AMD design, it isn't clear where the bottlenecks killing parallelism are. The block diagram clearly shows that each core should be as capable as the previous Intel (Westmere; unfortunately for AMD, Intel came out with the mighty Sandy Bridge roughly when AMD first shipped Bulldozer). The only obvious issue was the shared decoder: the decoder "should" produce 4 instructions per cycle (2 instructions per core per cycle was about the limit), but my guess is that this really couldn't be achieved. Decoding x86 instructions is a bit of a pain, and one of the reasons that x86 can't compete with ARM on the smallest (and weakest) systems.
The next place AMD comes up short is ALU pipelines. Intel can operate 3 ALU instructions and 2 AGU (load/store) instructions (this is true for both Westmere and Sandy Bridge) while AMD can only do 2 ALU and 2 AGU per cycle.
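To put rough numbers on the port counts above, here is a back-of-the-envelope sketch (using the figures as cited in this post; sustained IPC in practice is far lower and depends entirely on instruction mix):

```python
# Theoretical peak operations per cycle if every execution port
# issues every cycle -- an upper bound, never sustained in practice.

def peak_ops_per_cycle(alu_ports, agu_ports):
    return alu_ports + agu_ports

westmere = peak_ops_per_cycle(alu_ports=3, agu_ports=2)        # 3 ALU + 2 AGU
bulldozer_core = peak_ops_per_cycle(alu_ports=2, agu_ports=2)  # 2 ALU + 2 AGU

# A shared 4-wide decoder split across two busy Bulldozer cores caps
# each core at 2 instructions per cycle.
decoder_limit_per_core = 4 / 2

print(westmere, bulldozer_core, decoder_limit_per_core)
```

Even on paper, the decoder feeds each Bulldozer core only half of what its four execution ports could consume, which is the mismatch described above.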
One of the big ways AMD screwed up Bulldozer is that cutting these units out only works if you can increase the clock rate accordingly. The complexity of scheduling these units tends to increase geometrically as execution units increase, so AMD thought they could win by clocking faster. Unfortunately, at the same time AMD was firing all their circuit designers and moving everything to automated place and route, which could never match the speed of laying circuits out by hand (not sure how Zen works; between 10 years of advancement and the sheer complexity of 14-10nm design rules, the software may be winning).
As far as CMT vs SMT, any architect must be careful just how far down the rabbit hole they plan on going. ARM seems to do well without SMT (mostly because it is more power efficient not to use it) and goes "all the way through" to separate cores. Decode should clearly be separate for each thread (to be honest I'd strongly recommend this even in otherwise SMT computers, and simply power-gate the second thread's decoder when not in use). Another thing that designers would love to duplicate is the L1 cache. L1 cache is limited to the biggest cache you can access in 4 cycles; duplicating it would give you twice the cache for no cost in latency. The catch is that this new cache is likely further away from the registers (which we probably want to duplicate anyway, along with the RAT), and then we duplicate the execution units to be close to the registers.
At this point we've pretty much followed the same path as Bulldozer (plus a duplicated decoder). Of course, if you start out with a Zen you already started with a strong single-threaded core, so maybe it would work. It all comes down to just how much space the duplicated L1 + execution units take up (probably not all that much) and how effective power-gating them is. Thanks to the failure of Bulldozer, I'm fairly sure nobody at AMD wanted to even think about CMT, but I'd be curious if they can take the lessons learned and build a CMT out of a sufficiently strong Zen. The engineers might not want to risk the failure and the company might not want the PR of "a second Bulldozer", but I think they could get away with it if they merely called it a "stronger SMT" and didn't double count the cores.
There's also the issue of floating point. If benchmarks assume "everyone is running Cinebench 24/7", then CMT has no real advantages over SMT and obviously has some real costs in silicon area and heat/power. On the other hand, AMD could similarly strengthen the AVX unit such that it takes two CMT threads to saturate it.
The parallelism of the Bulldozer vs. Westmere:
(and start at the beginning for more than you ever wanted to know about bulldozer):
PS: this shouldn't be a case of "20-40% idle"; each CMT thread has 20% fewer execution units in single-threaded mode but 80% more execution units when both "cores" are active. Had the decoder been the issue, I'm reasonably sure the follow-ons (Steamroller, etc.) could have added a second decoder. There were deeper issues involved, likely too many compromises between the parallelism in the high-level design and just how much each pipeline stage could really accomplish in the picoseconds they had between clock cycles.
Is the HDD 256GB? Will it fit on the SSD? If not, expect some issues with copying the thing over (you'll probably need to reinstall everything manually, although you can at least copy a Steam library into place and it will only download the files needed to "install" and not re-download all the data you copied into the Steam library).
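The Steam-library trick can be sketched roughly as follows (a hedged example: the paths here are throwaway stand-ins created just for the demo, not your real drive letters or mount points):

```shell
# Seed the new library from the old drive so Steam re-verifies files
# instead of re-downloading them. Demo paths only -- substitute your
# actual old/new library locations.
OLD=$(mktemp -d)/SteamLibrary   # stands in for the old HDD library
NEW=$(mktemp -d)/SteamLibrary   # stands in for the new SSD library
mkdir -p "$OLD/steamapps/common/SomeGame"
echo "game data" > "$OLD/steamapps/common/SomeGame/game.dat"

mkdir -p "$NEW/steamapps"
cp -a "$OLD/steamapps/common" "$NEW/steamapps/"

# Then add the new folder in Steam (Settings > Downloads > Steam
# Library Folders) and "install" each game; Steam discovers the
# copied files and only downloads what's missing.
ls "$NEW/steamapps/common/SomeGame/game.dat"
```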
Also consider buying an extra HDD as backup anyway. You've had a warning with this drive, you might not get a warning the next time.
As long as AMD was stuck in the "cheap chip" game with low performance for low price then GF was a good fit for AMD. When AMD is making designs capable of competing head to head with Intel (and hopefully nvidia) then GF merely holds AMD back with substandard processes (it might be cheap, but higher performance would justify a much higher price than the cost of the wafer).
It all comes down to the effects of the wafer agreement and what it says/how it is negotiated.
GF is unlikely to ever produce a chip competitive with either Intel or Nvidia's silicon. Anything made by GF for AMD will be a support chip (such as a chipset) or possibly a chip where a previous generation might still sell (it is reasonably possible that AMD might continue to sell Raven Ridge 2400G and 2200G chips, especially if required by the wafer agreement).
I'm sure that it occurred to AMD that GF might eventually "throw in the towel" (something like 24 silicon manufacturing companies have given up attempting to compete in state-of-the-art chip fabbing over the last decade or two), and I can't imagine the agreement doesn't say something about GF's complete unwillingness to fill said wafers with competitive chips.
Then again this is AMD, who was willing to not only wildly overpay for ATI, but did so in cash instead of a stock swap, which doomed the company.
Back in the dawn of time (late 1960s), back when Moore was laying down his law, there was also the idea that there was a "wheel of karma" for computer graphics. Users would want the next big thing, engineers would create custom circuitry to do same, CPUs would take over the task, and repeat.
I've lived long enough to see this a few times, just in PCs/home computers. And that was a decade after it was written up.
In the beginning Woz created the Apple. The CPU moved pixels around a frame buffer.
Then Atari and Commodore decided that they wanted more action in games and created hardware sprites.
[others might claim that the sprites pre-dated the frame buffer, thanks to Jay Miner's work on the 2600...]
The Amiga tried to push hardware graphics further, but Macintosh and the IBM PC seemed to prefer having the CPU work with framebuffers.
Then Windows came along, and slowly video cards were built around 2d acceleration (drawing boxes for windows and moving them around, 1992-1996?).
By 1996 video cards had removed everything but BITBLT (although they could do that at least as wide as the RAM interface). The CPU was doing nearly everything.
Along came 3dFX with 3d graphics.
By roughly DX9, shaders have taken over. GPUs look a lot like some form of vector CPU and handle all the graphics. This only increases with things like Mantle/Vulkan/DX12.
Nvidia brings in raytracing and breaks up the GPU into shaders, AI circuitry, and raytracing specific circuitry.
Personally, I'm not going to buy any graphics card that isn't a 7nm card, and unlikely to buy until both nvidia and AMD put their 7nm cards on the table. I'm assuming that TSMC isn't going to significantly best their 7nm process for at least 5 years, so this may be the first chance to knowingly "futureproof" a purchase of silicon ahead of time (Intel's Sandy Bridge and nvidia's 8800 were simply extraordinary designs, which wasn't obvious until later designs had troubles improving on them, but this time it is obvious that the silicon isn't going to improve as much).
Not only am I happy to wait for 7nm, but I also am more than happy to wait until more RTX enabled games show up. It may take years, especially if Microsoft has AMD build its next console for them without any RTX hardware (no idea about rumors that 2060 will be GTX only, but it can't help). The only reason to buy hardware is to run software, and if the software isn't there don't buy the hardware.
This runs headlong into the issue of buying a 7nm card in hopes it will last 5 years. Obviously a non-raytracing card won't last 5 years if raytracing really takes off, while a card with low pixels/second won't last 5 years if VR takes off. Hopefully there will be enough games to make a clear decision as to whether raytracing is compelling enough or not (with Sony paying for development I'd at least hope that AMD will make a card worthy of consideration, especially if VR is a concern, but won't bet on it. RTX may well win by default).
One big thing about RTX is that it all but abandons VR. RTX is all about taking your time and painting pixels exactly right, even if resolution and framerate suffer. VR is all about low latency and massive numbers of pixels (and latency matters more, but who really wants a screen door effect). It sounds like nvidia is all about flat screens and has decided to pass on VR.
Sony appears to be one of the biggest players in VR and is paying for Navi. What does that mean? We'll know sooner or later, but maybe not until 2020 (certainly not until late 2019, unless Sony leaks PS5 details to smaller developers). Both AMD and Microsoft have included means of doing raytracing on GPUs, but don't expect that to compare with nvidia's methods (although be aware of the wheel of karma. But if Moore's law is no longer turning it, maybe not).
One reason for the "4K treadmill" is that I want a (big) 4K monitor for 2d computing. Running that in HD might be acceptable (if the AA and pixel colors are sufficient), but I'd still want 4K if possible. I'm not remotely interested in >>60Hz, although part of that may be older eyes, but I'm a weirdo who has chosen resolution over framerate for a very long time. Note that more or less sustained framerates near 90fps may be necessary for VR, so that might require a large number of pixels/sec.
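A quick back-of-the-envelope comparison of the pixel rates involved (the 2160x1200 headset resolution is an assumed first-generation VR figure, not something from this thread):

```python
# Raw pixels per second for desktop 4K vs a VR headset target.
# VR also wants that rate *sustained*, not just on average.

def pixels_per_second(width, height, fps):
    return width * height * fps

uhd_60 = pixels_per_second(3840, 2160, 60)   # 4K desktop at 60Hz
vr_90 = pixels_per_second(2160, 1200, 90)    # assumed headset panels at 90Hz

print(f"4K@60: {uhd_60 / 1e6:.0f} Mpix/s")
print(f"VR@90: {vr_90 / 1e6:.0f} Mpix/s")
```

VR's raw pixel count comes out lower here, but the low-latency, never-drop-a-frame requirement makes it much harder on the GPU than the raw numbers suggest.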
If you get sufficient performance by turning down visuals (and not lowering simulation quality) then you have more than enough CPU to do the job (which is a good thing, as it might blow the budget).
PCars2 seems to like nvidia cards. Assetto Corsa appears to work well with even what you have (but is due for an upgrade). Didn't look up the rest, but either of these cards should do wonders for what you are looking for.
The 2700 is ideal for overclocking and should hit 4.2GHz like any other Zen+, while the 1800X will be 10% slower (when equally pushed). It will also be much noisier, as a fixed overclock isn't nearly as efficient as boost. XFR enhanced and precision boost go out the window anyway on overclocked systems.
If you want a loud speed demon, then the 2700 begins to make sense. If you are willing to give up a bit less than that last 15% (a naive clock-scaling figure you won't actually see), then get the 1800X. I'm a bit unsure of the difference between the 1700 and the 1800X once boost is accounted for; if you are adding an aftermarket cooler anyway, the 1700 might make more sense.
Best guess is that this is a salvaged Raven Ridge chip, while the Intel parts could easily be built on a 2-core mask. I don't think AMD wants to sell many of these (they can make the 2200G/2400G and various mobile chips for the same price), otherwise they would have at least provided "Vega 6", which would presumably match the G4920 (a naive scaling shows the AMD part performing at half the speed of the Intel part).
AMD is probably feeling out the market and getting resellers used to working with them. While this is almost certainly a Raven Ridge chopped up, don't be too surprised if they produce a similar thing in 2020 (or 2021) that doesn't require such butchery. Intel has been stalled at 14nm for 5 years (with certain improvements, but no real scaling), and AMD may well plan on shipping 7nm parts for another 5 years (which would justify creating a separate mask for 2 core parts. Or perhaps they won't see a reason to ship below 4 cores, much like most ARM phones have at least that many ARM cores). Whatever the case, AMD has had to make do with very few specific chips for all their SKUs, but this might change soon.
On the flip side, Intel uses as few masks as possible. I think at one point they were using the same mask for Celerons, Pentiums (back when that was the name of their main desktop processor) and Xeons. Doing one thing very well and repeating it as often as possible has made Intel very, very rich.
Expect to see cheap laptops like this sooner or later, possibly even at a lower price point (smaller 1366x768 LCDs are cheap). Unfortunately, also expect the single stick of memory. Of course with half the cores and less than half the GPU, it might get away with half the bandwidth. Single core performance will certainly suffer with only one channel, and that will be much more painful on a system like this. I'd also certainly prefer a $25 Inland Pro 120GB SSD to the rotating drive listed here (and it would also fit in a laptop more easily).
I had to check the cheapest Optane I could find, and came up with a 16GB NVMe that cost more than the 4GB RAM stick, so I doubt it is worth it (although for jobs that need 4-16GB it would work surprisingly well; just don't expect to justify the cost beyond 8GB).
There have been tales of nvidia dumping stock on their board partners, best guess is that is what is happening. According to the infallible wiki, the 570/580 are built on 232mm2 of 14nm process while the nvidia is made on 200mm2 of 16nm process (which are close enough to be the same, these numbers don't tell you all that much), so cost to manufacture might as well be the same (not so with Vega and HBM2).
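For a sense of why those die sizes land in the same cost ballpark, here is a crude dies-per-wafer estimate (this ignores edge loss and defect yield entirely, so treat it as illustrative only):

```python
import math

# Gross die candidates per 300mm wafer, ignoring edge loss and yield.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

print(dies_per_wafer(232))  # RX 570/580-class die (14nm)
print(dies_per_wafer(200))  # the nvidia die quoted above (16nm)
```

Roughly 304 vs 353 gross candidates per wafer -- close enough that, as noted, the cost to manufacture might as well be the same.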
After a quick check, it appears a UK market issue. Here are the prices in the states (topmost selected):
1060 (3GB): $215
1060 (6GB): $278
Can't tell you when economics, politics, and marketing will come to their senses. In the US, the 580 is the obvious choice. I mentioned the used market, but used 570s and 580s are some of the most heavily used mining cards out there, it might make more sense to simply jump to a used 1080 (less loved mining card as it is hungry for electricity).
This looks "too good to be true", but appears to be the only "US pricing" in the EU; I'd think you can get it before brexit is official.
Capacity is most important, anything overflowing to the HDD is going to be slow. Gaming with long level loads might be able to tell the difference between SATA and NVMe, but there and booting are about the only times you could tell (which might drive people to the ADATA NVMes).
Until recently I'd have said that DRAM buffers are required (leaving them out was the mark of "too cheap" drives). The ADATA XPG SX6000 doesn't appear to have any, and doesn't appear to hurt performance (although check reviews, I haven't slapped my money down on one).
There's little point to RAID0 with SSDs. SATA drives have a single high speed link to the motherboard, cheap NVMe drives (like the ADATA mentioned above and the Inland Pro) have two lanes, and the expensive NVMe drives have four. Using two cheap NVMe drives in parallel might have some advantages over a "real" NVMe drive, but I'd worry about confusing the "drivers" (they tend to need to use system DRAM) and you also have twice the likelihood of losing all your data (on top of the danger of the "cheap drives"). I wouldn't go this route.
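The lane arithmetic behind that, sketched out (the per-lane and SATA figures are the usual post-encoding-overhead approximations; real drives rarely sustain these peaks):

```python
# Approximate usable link bandwidth per drive, in MB/s.
PCIE3_LANE_MBPS = 985   # PCIe 3.0, after 128b/130b encoding
SATA3_MBPS = 600        # SATA III, after 8b/10b encoding

sata = SATA3_MBPS
cheap_nvme = 2 * PCIE3_LANE_MBPS   # x2 link (ADATA/Inland Pro class)
full_nvme = 4 * PCIE3_LANE_MBPS    # x4 link ("real" NVMe)

print(sata, cheap_nvme, full_nvme)
# Two x2 drives in RAID0 match an x4 drive on paper, but add driver
# overhead and double the failure domain.
```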
The point of RAID1 is to keep going in the face of a lost drive. Since human error (including "I didn't put that crypto-ransom virus on my machine") is more likely than drive self-bricking, this really isn't a good backup method. Get an extra HDD, they are cheaper and much more reliable (if you remember to back it up). Leave fancy drive mirroring to enterprise hard drives to enterprises (of course, if your income stops when your hard drive stops, your computer is an "enterprise system". Pay what you need for reliability).
If you are at all interested in AMD's StoreMI software, it is pretty picky about "one fast" and "one slow" drive. It is also available to everyone else at $40 (small drives) or $60 (up to 1TB of "fast drive") http://www.enmotus.com/fuzedrive
I'd also look into the possibility of using 3 separate cards (presumably ~1080 or so) and giving each monitor its own card. This should avoid those pesky SLI issues, although it isn't clear how well this suits each game.
There's also motherboard compatibility. Few motherboards supply even x8 to three cards, so you would likely find one (or more likely two) cards being fed at x4 (which might not hurt as much as you'd think).
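The slot bandwidth at stake, roughly (assuming PCIe 3.0 at the usual ~985 MB/s per lane after encoding overhead):

```python
# Usable bandwidth per card for common lane allocations.
PCIE3_LANE_MBPS = 985

bandwidth = {lanes: lanes * PCIE3_LANE_MBPS for lanes in (16, 8, 4)}
for lanes, mbps in bandwidth.items():
    print(f"x{lanes}: {mbps / 1000:.1f} GB/s")
```

Even x4 leaves ~3.9 GB/s per card, which is why the hit might not hurt as much as you'd think.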
This seems to be going out of favor, and limited googling didn't tell me anything about this, so it might not be the possibility I thought it was. But don't expect simple solutions (other than using a 1080ti and accepting 40fps).
This seems a bit much. I also can't remember ever hearing of specific accounts required to download drivers tied to your specific board. Perhaps they are tired of people flashing boards with "pro" bioses. It also looks like the 2080ti is the same chip as the "pro Turing" meaning that flashing the bios could make a huge difference in raytracing power (several thousand dollar differences).
I don't recall Intel and AMD expecting fans to pre-order a month in advance and refuse to give any form of benchmarks other than "it's twice as fast. Trust us."
When you attach a TV to a computer you have to be careful about latency and framerate. Things to look for are "refresh rate" "refresh rate technology" and "game mode".
Ideally it will include reviews from people who have used it as a monitor, or at least attached a console to it. Even if you aren't gaming, you don't want the mouse to respond with a lot of lag (even wimpy cards should be able to do non-gaming things at 4k@60Hz, although your notebook's HDMI interface might be an issue). Basically anything with a 60Hz response rate and a "game mode" should be fine (although read all the fine print to make sure that "60Hz" means a true 60Hz input and not motion interpolation slapped on top of a 30Hz signal. Until recently that was the common case among potential 4k TV monitors).
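The notebook-HDMI caveat comes down to raw bandwidth; a quick check (this ignores blanking intervals and TMDS overhead, so the real wire rate is even higher):

```python
# Minimum uncompressed video bandwidth in Gb/s.
def video_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

rate = video_gbps(3840, 2160, 60)
print(f"4K@60 needs ~{rate:.1f} Gb/s")
# HDMI 1.4 carries roughly 8.2 Gb/s of video data; HDMI 2.0 about
# 14.4 Gb/s -- so 4K@60 needs an HDMI 2.0 (or DisplayPort) output.
```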
Then don't worry about it. 5400rpm is probably better.
While they say that every year, I've never seen the levels of hiding the data before. Can't tell whether this is because nvidia shoved a ton of unsold chips at their resellers or because the performance isn't there.
Nvidia will get maybe 10% improvement from the half-node process (the 7nm process is the one to watch, and that should be ready soon; expect AMD to delay Navi again and again, and nvidia to use that delay to sell 20x0s before shrinking them into 7nm 21x0s). You'll also get a boost from the GDDR6 RAM. The rest will have to come from bigger (and more expensive) chips (that are already full of specific "RTX" circuitry).
The Alienware is a pretty extreme monitor, and I'd expect that an RX580 wouldn't go much further than 1080p@60fps. I'd look at a used Vega 64 (even with all the mining issues) and expect to upgrade again to Navi in a year or more.
If you are going with the RX580, I'd recommend keeping the monitor and upgrading both when Navi drops (or possibly a better monitor with gsync and a next-gen nvidia).