python - Pandas memory error -


I have a CSV file with 50,000 rows and 300 columns that causes a MemoryError in pandas (Python) when I run the following:

    merged_df.stack(0).reset_index(1)

The DataFrame looks like this:

    GRID_WISE_MW1  Col0  Col1  Col2  ...  Col300
    7228260        1444  1819  2042
    7228261        1444  1819  2042

I do not hit this bug with the newest pandas (0.13.1) when the DataFrame has fewer rows (~2,000).

Thank you!

This works for me on 64-bit Linux with 32 GB of memory; it uses slightly less than 2 GB:

    In [5]: def f():
                df = DataFrame(np.random.randn(50000, 300))
                df.stack().reset_index(1)

    In [6]: %memit f()
    maximum of 1: 1791.054688 MB per loop

Since you did not specify, this will not work at all on 32-bit (as you generally cannot allocate a contiguous 2 GB block), but it should work if you have adequate swap/memory.
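To see why the reshape needs so much memory, here is a small-scale sketch of the same operation (the frame size and column names are made up for illustration): `stack` pivots the column labels into the index, so an n × m frame becomes n·m rows, which is why the intermediate roughly scales with the full size of the original data.

```python
import numpy as np
import pandas as pd

# Small stand-in for the 50,000 x 300 frame from the question
df = pd.DataFrame(np.random.randn(5, 3), columns=["Col0", "Col1", "Col2"])

# stack() moves the columns into the index: each of the 5 rows becomes
# 3 rows (one per column), i.e. 15 rows; reset_index(1) then turns the
# column-label level of the index back into an ordinary column.
stacked = df.stack().reset_index(1)

print(stacked.shape)  # (15, 2)
```

If memory is tight, one possible mitigation (not from the answer above) is to downcast the data to `float32` before stacking, which roughly halves the footprint at the cost of precision.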
