What is the impact on Apple if NVIDIA were to buy ARM?

x86 is RISC based.


Really? The ever-expanding 8086+ Intel architecture is classed as Reduced Instruction Set?

I learned machine code back when 8-bit CPUs were comprehensible. After that they got mighty complicated: each new generation added more transistors, and those generally meant new instructions. How do MMX and SSE through SSE4, etc., count as RISC?

Happy to learn more. :thumbup:
 
Intel Core CPUs are RISC-based, like all CPUs nowadays; they merely expose the ancient x86 complex instruction set.
 
Okay ... so you are saying they are RISC architectures that parse CISC instructions? As the Intel line goes back to the late '70s and early '80s, when did they change from CISC to RISC dies?
 
Yes, AMD has been using RISC cores since the K5, and Intel since the Pentium Pro.
 
Fascinating. Maybe I should pick up an assembler again :thumbup:
 
I have 2 computers running High Sierra. One uses a GTX 1050 Ti OC and the other uses an entry-level GT 1030. On both of those computers, when playing the Unigine "Heaven" or "Valley" graphics benchmarks, the grass is green. I also have one computer running Mojave with an AMD RX 580 Nitro+. On that computer the same apps, run on the same monitor, display green grass with blue tips. The NVIDIA cards display video that looks better, although the RX 580 is superior in fps.
Upgrade to Catalina then. Even Supplemental Updates have new graphics card drivers.
 
Really? The ever-expanding 8086+ Intel architecture is classed as Reduced Instruction Set?


@UtterDisbelief,

No, Intel CPUs are classified as CISC CPUs; however, the reality is somewhat more complex than that ...

Intel Core CPUs are RISC-based, like all CPUs nowadays; they merely expose the ancient x86 complex instruction set.


@tom111 is correct in that Intel CPUs are a hybrid of CISC and RISC architectures.

Intel switched to this hybrid approach way back in the mid-'90s with the P6 generation of Pentium Pro CPUs. At the time I was working for a company that received some of the first pre-production Pentium Pros in the UK, for use in the next-generation industrial control system we were developing, and Intel UK gave us a presentation explaining that this hybrid approach was the future of their CPUs.

Since then, all Intel CPUs have used a CISC instruction set externally, but internally they use multiple parallel decoders that break the CISC instructions down into RISC-like micro-ops, which are then executed on RISC-based cores.
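
To make that concrete, here is a toy C sketch of the idea (purely illustrative: the real µop encodings are Intel-internal, so the names and the exact three-way split are my own invention). It shows a single read-modify-write x86 instruction, add [rbx], eax, being cracked into separate load, add and store micro-ops:

#include <stdio.h>

/* Illustrative only: models cracking one CISC instruction into RISC-like
   micro-ops. Real decoders emit undocumented internal encodings. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    const char *dst, *src;
} uop;

/* "decode" one read-modify-write instruction: add [mem], reg */
static int decode_add_mem_reg(const char *mem, const char *reg, uop out[3]) {
    out[0] = (uop){ UOP_LOAD,  "tmp", mem   };  /* tmp   <- [mem]     */
    out[1] = (uop){ UOP_ADD,   "tmp", reg   };  /* tmp   <- tmp + reg */
    out[2] = (uop){ UOP_STORE, mem,   "tmp" };  /* [mem] <- tmp       */
    return 3;
}

int main(void) {
    static const char *names[] = { "LOAD", "ADD", "STORE" };
    uop uops[3];
    int n = decode_add_mem_reg("[rbx]", "eax", uops);
    for (int i = 0; i < n; i++)
        printf("uop %d: %-5s %s, %s\n", i, names[uops[i].kind],
               uops[i].dst, uops[i].src);
    return 0;
}

On the real silicon the common instructions are cracked directly by hardwired decoders, while the truly baroque ones fall back to a microcode ROM.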

However, the decoding of CISC to RISC inside the CPU is very complex; it is basically a task-specific, dedicated sub-processor inside the CPU whose behaviour can be patched via microcode updates.

This intermediate layer of decoding CISC instructions to RISC micro-ops of course adds significant overhead to the instruction pipeline, so Intel introduced features such as speculative and out-of-order execution to help limit the overhead ... and we all know how that worked out for them :clap::clap:.
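
For anyone curious just how badly: the published Spectre v1 gadget (bounds-check bypass, from Kocher et al.'s paper) shows in a few lines of C why speculation became a liability. The array sizes here are arbitrary and the cache-timing readout step is omitted:

#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* probe array: one stride per possible byte value */
volatile uint8_t sink;

void victim(size_t x) {
    if (x < array1_size) {   /* the branch the CPU may mispredict */
        /* speculatively executed even for out-of-bounds x, leaving a
           secret-dependent line in the data cache that an attacker can
           later recover by timing accesses to array2 */
        sink = array2[array1[x] * 512];
    }
}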

Because the x86 CISC instructions are basically defined in updatable microcode firmware, Intel is able to quickly and easily introduce new CPU features and fix security issues without having to physically change the CPU design. This is the key advantage of the hybrid CISC-RISC architecture, and it is why we see Intel introducing new CPU features with each new generation of CPU. It is also why Intel CPUs can get slower over time: each time a security flaw is found, additional microcode has to be added to fix the issue, which in many cases slows down the CISC-to-RISC decoding layer.

CPUs are generally classified by their instruction set, and Intel x86 is a whopper: probably the most complex mass-produced CPU instruction set on the planet. As such they are classified as CISC CPUs, despite having a RISC-based architecture at their very core (pun intended :)).

Cheers
Jay
 
Great explanation, thanks @jaymonkey. Makes it a lot clearer. :thumbup:

My own machine code/assembler career came to a halt with the 80286. (Shows my age!) I switched to ANSI C as it was a logical step. I didn't go to "++" or "OO" though, just Kernighan and Ritchie. I then moved over to people management, hence my lack of the latest knowledge.

And yes, Intel's version seems a very complicated RISC!
 
My own machine code/assembler career came to a halt with the 80286. (Shows my age!)


@UtterDisbelief,

Back in the day we were using the Z80 for years past its sell-by date. Everything was coded in machine code and burned into EPROMs (no assembler), which allowed us to have very optimised code for our control systems.

We then moved on to the Motorola 68000, which we programmed in assembler and stuck with to the end of the '80s. I was never a fan of the Motorola instruction set, and the system was never as good as the Z80-based ones despite having a more powerful processor.

In the '90s the company wanted to move to Windoze for the front end of our control systems, as that was becoming the standard in business and customers wanted to integrate control systems with their company IT systems. That led to me heading up the software design team for a new system; at the time we still had a very close working relationship with Intel, which is how we ended up with pre-production Pentium Pros for the development.

We designed our own PCI card with an onboard processor to interface to our existing control-system bus. The card did all the signal conditioning and channel processing, and used DMA to populate a reserved section of PC memory with the channel values, so all Windoze had to do was read that section of shared memory and display the results. It was a good system that had a long life, and it allowed us to give our customers a much-improved front end without having to update the I/O racks. That system lasted into the mid-2000s ... good times for sure, things were so much simpler back then.
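
From memory, the host side boiled down to something like this (a minimal C sketch with hypothetical names; in reality the driver mapped the DMA region for us, and the channel count depended on the rack):

#include <stdint.h>

#define CHANNEL_COUNT 64   /* hypothetical: the real count varied per rack */

/* The card keeps this region populated via DMA; volatile because it is
   updated behind the CPU's back. */
static volatile uint16_t *shared_base;

/* Reading a channel is just a memory read: no port I/O, no interrupts. */
uint16_t read_channel(unsigned ch) {
    return (ch < CHANNEL_COUNT) ? shared_base[ch] : 0;
}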

Cheers
Jay
 
For me, back in the day: DEC PDP-15 symbolic assembler, FORTRAN IV, 6502 assembler, BASIC, Turbo Pascal laced with 8086 assembler for speed, then C, 68k assembler, C++, some LISP, FORTH...
I slowly switched to using specialised tools made by others when these became available.

I will occasionally code something in Ruby or Python, or do some DSP coding (not hard if you have a dev kit and some examples), just to flex my aging coding muscles.
 