openfoam / Issues / #2310
Closed
Created Dec 18, 2021 by B H@bobuhito

Speeding up 2D simulations

EDIT: I would just delete this post, but that's not allowed. Never mind my starting thought below: I realize now that phi is actually the flux through the faces, which explains the doubling, and there is no waste here for 2D meshes because the faces normal to the unused dimension are not included. I assume the code is already optimized for 2D structures, and no further easy speed-up is known.
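A back-of-the-envelope count supports the edit above: for a one-cell-thick Nx x Ny x 1 hex mesh (the usual 2D setup), the internal faces outnumber the cells by roughly a factor of two, which matches the observed file-size ratio between "phi" and "p". A minimal sketch, illustrative only and not OpenFOAM code:

```python
# Face vs cell counts for an Nx x Ny x 1 structured hex mesh (2D case,
# one cell thick). "p" stores one value per cell; "phi" (the face flux)
# stores one value per internal face, so its internalField is ~2x larger.
def counts(nx, ny):
    cells = nx * ny
    # internal faces with x-normals plus internal faces with y-normals;
    # faces normal to the unused z direction sit on the "empty" patches
    # and are not part of phi's internalField
    internal_faces = (nx - 1) * ny + nx * (ny - 1)
    return cells, internal_faces

cells, faces = counts(100, 100)
print(cells, faces, faces / cells)  # 10000 19800 1.98
```

As Nx and Ny grow, the ratio approaches 2, so a roughly doubled file size is expected rather than a sign of wasted storage.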

Considering the damBreak example, in which interFoam is run on a 2D mesh, I noticed that the "phi" and "alphaPhi0.water" output files are about double the size of the "p" file.

I am guessing this is because they store data for each point in the mesh (not for each cell in the mesh, as "p" does), right?

In these 2D simulations (since a 2D mesh is just a set of 2D points repeated in the unused dimension), this seems a little wasteful, roughly doubling some space and time requirements (assuming the CPU works with the doubled phi representation). Then again, I might be misunderstanding this, so please correct me if I am wrong.

Anyway, I'm wondering how much speed-up would be possible if the code were optimized for 2D structures.

Edited Dec 19, 2021 by B H