• arotrios@lemmy.world · 7 hours ago

    You’re correct - a standard worksheet tab tops out at 1,048,576 rows (a hair over a million).

    The way to get around that limitation is to use the Data Model within Power Pivot.

    It can accept all of the data connections a standard Power Query can (ODBC, SharePoint, Access, etc.).

    You build the connection in Power Pivot to your big tables and it pulls in a preview. If needed, you can build relationships between tables with the Relationship Manager. You can also use DAX to write formulas, much like you would in a regular Excel tab (the syntax is closer to Excel's formula language than to Visual Basic). You can then run Pivot Tables and charts against the Data Model to pull out the subsets of data you want to look at.
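    For anyone coming at this from the scripting side, here's a minimal Python/pandas sketch of the same relationship-plus-pivot pattern (table and column names are made up for illustration): the merge plays the role of a Data Model relationship, and the groupby plays the role of the pivot table on top of it.

    ```python
    import pandas as pd

    # Hypothetical stand-ins for two tables in the Data Model.
    orders = pd.DataFrame({
        "customer_id": [1, 1, 2, 3],
        "amount": [120.0, 80.0, 200.0, 50.0],
    })
    customers = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "region": ["North", "South", "North"],
    })

    # merge ~ Power Pivot relationship; groupby ~ pivot table over it.
    joined = orders.merge(customers, on="customer_id", how="left")
    summary = joined.groupby("region", as_index=False)["amount"].sum()
    print(summary)
    ```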

    The load times are pretty decent - it usually takes 2-3 minutes to pull a table of 4 million rows from a SQL database over ODBC, though your results will vary depending on the data source. It can get memory intensive, so I recommend a machine with a decent amount of RAM if you're going to build anything for professional use.
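    If you end up doing that same pull from Python instead, reading in chunks keeps memory in check on tables that size. A minimal sketch, assuming a pyodbc DSN called MyWarehouse and a table dbo.big_fact_table (both hypothetical):

    ```python
    import pandas as pd
    import pyodbc

    # Hypothetical DSN and credentials - substitute your own.
    conn = pyodbc.connect("DSN=MyWarehouse;UID=report_user;PWD=secret")

    # Streaming in 500k-row chunks avoids holding the raw cursor
    # results and the final DataFrame in memory at the same time.
    chunks = pd.read_sql("SELECT * FROM dbo.big_fact_table", conn,
                         chunksize=500_000)
    df = pd.concat(chunks, ignore_index=True)
    print(len(df), "rows loaded")
    conn.close()
    ```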

    The nice thing about building it out this way (as opposed to using independent Power Queries to bring out your data subsets) is that it's a one-button refresh, with most of the logic and formulas hidden away in the Data Model. That makes it a nice way to build reports for end users that's harder for them to fuck up by deleting a formula or hiding a column.

    • driving_crooner@lemmy.eco.br · 6 hours ago

      Oh yes, I remember using Power Query for a few months once I started working with bigger databases, but I saw that moving to Python would be better career-wise and never went back to Excel for the actual work (though in the end everything gets exported to Excel anyway).
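      Funnily enough, that last export step is about one line in pandas. A minimal sketch, assuming a result DataFrame df and the openpyxl package installed for .xlsx output:

      ```python
      import pandas as pd

      # Hypothetical result set standing in for whatever the pipeline produced.
      df = pd.DataFrame({"region": ["North", "South"], "total": [450.0, 200.0]})

      # Requires openpyxl for .xlsx files.
      df.to_excel("report.xlsx", sheet_name="summary", index=False)
      ```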