If I understand the concept of #FPGA properly, it uses "generic" electronic modules (field-programmable gate arrays) that can be (reversibly?) "personalized" to behave like custom-designed circuits.
To personalize them, we need:
1. A design and development environment, where we design and test specific configurations for a given functionality. This part is rather resource-intensive.

2. A "flashing" environment to "imprint" existing models onto generic FPGA modules, which is much less resource-intensive than the previous part.

Now, I can imagine that, for the sake of #maintenance and #resilience, I may use FPGAs to build all kinds of devices around them. Then, in case of failure (EMP, maybe), I can safely stock a pile of generic modules, an imprinter, and a library of models, so I can replace whatever breaks without hunting for a specialised chip.

Does it make sense and why not? ;-)

#postapo #hitec #critical #infrastructure #collapse #doomer musings.
in reply to 8Petros [Signal: Petros.63]

You've got the general idea. The development environment consists of (typically) a hardware description language (#vhdl or #verilog), a synthesis toolchain (usually vendor-specific, though there are some open-source ones) that turns the code into a netlist of logic equations, and finally a place-and-route toolchain that takes that netlist, maps it onto the individual small discrete logic functions on the chip, and configures the routing between them.
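
To make step one a bit more concrete, here is a minimal sketch of the kind of design you would feed into that flow. It uses Amaranth, a Python-embedded open-source HDL, rather than raw VHDL/Verilog, purely so the examples in this thread can stay in one language; the module and signal names are made up for illustration.

```python
from amaranth import Elaboratable, Module, Signal
from amaranth.back import verilog


class Blinker(Elaboratable):
    """A free-running counter whose top bit drives an LED."""

    def __init__(self, width=24):
        self.counter = Signal(width)
        self.led = Signal()

    def elaborate(self, platform):
        m = Module()
        # Synchronous logic: increment the counter every clock cycle.
        m.d.sync += self.counter.eq(self.counter + 1)
        # Combinational logic: the LED follows the counter's MSB.
        m.d.comb += self.led.eq(self.counter[-1])
        return m


if __name__ == "__main__":
    top = Blinker()
    # Emit plain Verilog for the synthesis / place-and-route stages to chew on.
    print(verilog.convert(top, ports=[top.led]))
```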

There is volatile memory on the device that stores the current configuration and must be loaded (in one of many ways) on power-up. A few models have non-volatile memory that stores the configuration and loads it automatically. For high-radiation environments, there are even fewer (and very expensive) models with write-once non-volatile configuration memory.

I don't think using them as a general-purpose drop-in replacement for things is a particularly good idea, though. The board environment is honestly more important than what you're doing in the logic.
in reply to Remi

Thank you for clarifying it for me. I am playing with the concept for now and - if I get my hands on appropriate equipment - I may try to test the idea in some simple context. No pressure from me - ATM it is just something to give me a bit of mental distraction between one e-learning module I translate and another. :-)
in reply to 8Petros [Signal: Petros.63]

The reason #1 is resource-intensive is that it relies on simulated annealing to decide which logic is performed on which logic blocks, so as to maximize performance within the design constraints. It's the same technique used in this video, for a different application: https://www.youtube.com/watch?v=Lq-Y7crQo44
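
For anyone who hasn't met simulated annealing before, here is a toy Python sketch of the core idea; real place-and-route tools use far more sophisticated cost functions, move generators, and cooling schedules, and the parameter values below are made up for illustration.

```python
import math
import random


def anneal(state, cost, propose_move, t_start=10.0, t_end=0.01, alpha=0.95):
    """Generic simulated annealing: always accept improvements, and accept
    regressions with a probability that shrinks as the temperature cools."""
    current, current_cost = state, cost(state)
    t = t_start
    while t > t_end:
        for _ in range(100):  # trial moves per temperature step
            candidate = propose_move(current)
            delta = cost(candidate) - current_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
        t *= alpha  # geometric cooling schedule
    return current
```

For placement, the state would be an assignment of logic blocks to physical sites, and the cost something like estimated wire length and timing slack.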
in reply to Tathar makes stuff

Importantly, nearly all of the resource-intensive work happens in the toolchain's place-and-route algorithm. The HDL modules, and their conversion into gate-level logic, are cheap by comparison. From there, the toolchain has to fit that logic onto whichever physical logic blocks best satisfy the timing, thermal, and pin constraints, by deciding where to place them on the die and how to route them together for a good-enough connection. Hence the name.
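
Continuing the toy model from the annealing sketch above: for placement, the cost and move functions might look something like this. Half-perimeter wirelength is a standard estimate; everything else (the dict-of-coordinates placement, the names) is invented for illustration, and real tools add weighted timing and congestion terms.

```python
import random


def wirelength(placement, nets):
    """Half-perimeter wirelength: for each net, the bounding box of the
    blocks it connects. A rough stand-in for eventual routing cost."""
    total = 0
    for net in nets:  # each net is a list of block names
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total


def propose_swap(placement):
    """Move generator: swap the sites of two randomly chosen blocks."""
    new = dict(placement)  # placement maps block name -> (x, y) site
    a, b = random.sample(list(new), 2)
    new[a], new[b] = new[b], new[a]
    return new
```

The real cost also has to penalize placements that miss timing on critical paths or put I/O on the wrong pins, which is exactly the "within the design constraints" part above.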
in reply to Tathar makes stuff

As for #2, those models are just what you get when you apply the resource-intensive algorithm to only part of the final design, and then treat that "solved" part as one big component to be placed and routed as a monolith. Although it takes less time when that part of the work is already done for you, that approach can also produce a less optimal solution on the FPGA.
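
In the toy model above, a pre-solved part would show up as a group of blocks that can only move together as one rigid unit, something like the sketch below (the function and names are hypothetical). Freezing their relative positions is what saves time, and also what can cost optimality.

```python
def shift_macro(placement, macro_blocks, dx, dy):
    """Shift a pre-solved block group as one rigid unit. Its internal
    layout is frozen, so the optimizer can no longer rearrange those
    blocks to better suit the rest of the design."""
    new = dict(placement)
    for b in macro_blocks:
        x, y = new[b]
        new[b] = (x + dx, y + dy)
    return new
```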
in reply to Tathar makes stuff

Also, if you're using the same design across multiple chips of the same FPGA model, you can flash all of them with the same bitstream file generated from #1. You only have to do the resource-intensive part once.
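
As a sketch of that "build once, flash many" step, assuming a hypothetical command-line programmer (substitute whatever tool matches your FPGA family, e.g. the vendor's programmer or openFPGALoader):

```python
import subprocess

# Hypothetical device identifiers and flashing tool, for illustration only.
DEVICES = ["cable-serial-001", "cable-serial-002", "cable-serial-003"]
BITSTREAM = "design.bit"  # generated once by the expensive place-and-route run

for serial in DEVICES:
    # Each flash is the cheap part; the costly synthesis/place-and-route
    # happened exactly once, when design.bit was produced.
    subprocess.run(["flash_tool", "--device", serial, BITSTREAM], check=True)
```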