! This program uses the tree/block-based AMR package PARAMESH to run
! an advection (hydro) calculation in 1D.
!
! AMR package written by: Peter MacNeice
! January 2002
!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
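!
! Outline: initialize the amr package, describe the domain boundaries,
! build the initial grid-block tree, set the initial solution, advance
! it through ntsteps timesteps with dynamic refinement, and close the
! package.
!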

program advection

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!


!-----------------------------------------------------------
! Include required header files.
!-----------------------------------------------------------

! include file to set up problem dimensions.
      use paramesh_dimensions

! include file to define solution data structures and support arrays
      use physicaldata

! include file defining the tree
      use tree

! include interface block definitions for some non-mpi paramesh routines
      use paramesh_interfaces, only :  &
                        comm_start,     &
                        amr_initialize, &
                        amr_close

! include interface block definitions for paramesh mpi routines
      use paramesh_mpi_interfaces

! Include required header file for mpi library
      include 'mpif.h'

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!-----------------------------------------------------------
! local amr variables
!-----------------------------------------------------------

      integer :: nprocs,mype
      integer :: ierr
      logical :: lrefine_again
      integer :: lrefine_max
      integer :: lrefine_min
      integer :: ntsteps,iopt,nlayers
      integer :: no_of_blocks
      integer :: l,loop,loop_count,loop_count_max
      save mype

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!-----------------------------------------------------------
! initialize package
!-----------------------------------------------------------

! amr package initialization - initializes all model independent stuff
      call amr_initialize
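
! Note: amr_initialize is assumed here to start MPI itself (via the
! comm_start interface included above), which is why no explicit call
! to MPI_INIT appears before the rank and size queries below.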
      call MPI_COMM_RANK(MPI_COMM_WORLD, mype, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
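
! mype and nprocs are used throughout what follows: only processor 0
! (mype == 0) seeds the initial root block of the tree, and nprocs is
! passed on to the integration routine advect_muscl.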

!-----------------------------------------------------------
! Identify volumes to be associated with each
! distinct boundary condition
!-----------------------------------------------------------

! When using MPI, additional information about the location of
! boundaries needs to be supplied.
!
! Define bounding boxes for the volumes in which each different
! boundary condition is applied. In this case we have 2 such volumes,
! the first to the left of the solution domain ( x < 0.0 ) and
! the second to the right ( x > 1.0 ). In this example we assume the
! same condition, set with flag -21, is to be applied at both
! boundaries.
!
! first set b.c. flags
      boundary_index(1) = -21
      boundary_index(2) = -21
!
! now set the bounding box associated with the first boundary volume
      boundary_box(1,1,1)   = -1.e30          ! low x
      boundary_box(2,1,1)   = 0.              ! high x
      boundary_box(1,2:3,1) = -1.e30          ! low y and z
      boundary_box(2,2:3,1) = 1.e30           ! high y and z
! and the bounding box associated with the second boundary volume
      boundary_box(1,1,2)   = 1.              ! low x
      boundary_box(2,1,2)   = 1.e30           ! high x
      boundary_box(1,2:3,2) = -1.e30          ! low y and z
      boundary_box(2,2:3,2) = 1.e30           ! high y and z

!-----------------------------------------------------------
! Set up the initial grid-block tree
!-----------------------------------------------------------

! set limits on the range of refinement levels to be allowed.
! level 1 is a single block covering the entire domain, level 2 is
! refined by a factor 2, level 3 by a factor 4, etc.
      lrefine_max = 10                  ! finest refinement level allowed
      lrefine_min = 6                   ! coarsest refinement level allowed

! set the no of blocks required initially to cover the computational domain
      no_of_blocks = 2**(lrefine_min-1)

! initialize the counter for the number of blocks currently on this processor
      lnblocks = 0

! begin by setting up a single block on processor 0, covering the whole domain
      if (mype.eq.0) then
        lnblocks = 1
        bnd_box(1,:,1) = 0.
        bnd_box(2,:,1) = 1.
        bsize(:,1)     = bnd_box(2,:,1) - bnd_box(1,:,1)
        coord(:,1)     = .5*bsize(:,1)
        nodetype(1)    = 1
        lrefine(1)     = 1
        neigh(1,1,1) = -21              ! initial block is not its own
        neigh(2,1,1) = -21              ! neighbor. hard wall bc
        neigh(1,2,1) = -22              ! initial block is not its own
        neigh(2,2,1) = -22              ! neighbor. symmetry bc
        refine(1) = .true.
      endif

! Now cycle over blocks until `no_of_blocks' leaf blocks have been created.
      loop_count = 0
      loop_count_max = int(log(real(no_of_blocks))/log(2.)+.1)
      do while (loop_count.lt.loop_count_max)

        do l = 1,lnblocks
          refine(l) = .true.
        enddo

! refine grid and apply morton reordering to grid blocks if necessary
        call amr_refine_derefine

        loop_count = loop_count+1
      enddo
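
! Worked check of the setup above: with lrefine_min = 6 the coarsest
! allowed grid needs no_of_blocks = 2**(6-1) = 32 leaf blocks, and
! loop_count_max = log2(32) = 5, so five global refinement passes turn
! the single root block into 32 level-6 blocks, each of width 1/32 of
! the unit domain.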


!-----------------------------------------------------------
! set up initial solution on the grid blocks
!-----------------------------------------------------------

      time = 0.
      dt = 0.
      call amr_initial_soln(mype)

! exchange guardcell information and boundary conditions
      iopt = 1
      nlayers = nguard
      call amr_guardcell(mype,iopt,nlayers)

! end of initialization

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

! set the no of timesteps
      ntsteps = 1001

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!-----------------------------------------------------------
! Begin time integration
!-----------------------------------------------------------

      do loop = 1,ntsteps

! perform a single timestep integration
        call advect_muscl(mype,dt,time,nprocs,loop)

! test to see if additional refinement or derefinement is necessary.
! note - a call to amr_guardcell must come before this call so that
! the refinement test can also be applied to the parents of leaf
! blocks. This avoids a potential refinement/derefinement flip-flop
! happening on successive timesteps. In this example we have placed
! a call to amr_guardcell before the return statement inside
! advect_muscl.
        call amr_test_refinement(mype,lrefine_min,lrefine_max)

! refine grid and apply morton reordering to grid blocks if necessary
        call amr_refine_derefine

! prolong solution to any new leaf blocks if necessary
        call amr_prolong(mype,iopt,nlayers)

! exchange guardcell information and boundary conditions
        call amr_guardcell(mype,iopt,nlayers)

      enddo                             ! end timestep integration loop

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!-----------------------------------------------------------
! close amr package
!-----------------------------------------------------------

      call amr_close()

      stop
      end program advection