From 23d6748c68da1bae7398bf1af9add33c0f3627b3 Mon Sep 17 00:00:00 2001
From: Bryony Nickson
Date: Thu, 6 Feb 2025 17:42:45 -0500
Subject: [PATCH 1/4] Adding MIRI Coron notebook

---
 .../Coronagraphy/JWPipeNB-MIRI-Coron.ipynb   | 1568 +++++++++++++++++
 notebooks/MIRI/Coronagraphy/requirements.txt |    5 +
 2 files changed, 1573 insertions(+)
 create mode 100644 notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb
 create mode 100644 notebooks/MIRI/Coronagraphy/requirements.txt

diff --git a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb
new file mode 100644
index 0000000..d60fc67
--- /dev/null
+++ b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb
@@ -0,0 +1,1568 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "651c89d8",
+ "metadata": {},
+ "source": [
+ "\"stsci_logo\" "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa04a6ac",
+ "metadata": {},
+ "source": [
+ "\n",
+ "# MIRI Coronagraphy Pipeline Notebook #"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ff2fb85c",
+ "metadata": {},
+ "source": [
+ "**Authors**: B. Nickson; MIRI branch<br>
\n", + "**Last Updated**: Jan 28, 2024
\n", + "**Pipeline Version**: 1.14.1 (Build 10.2)" + ] + }, + { + "cell_type": "markdown", + "id": "d988f765", + "metadata": {}, + "source": [ + "**Purpose**:
\n", + "This notebook provides a framework for processing generic Mid-Infrared Instrument (MIRI) Coronagraphic data through all three James Webb Space Telescope (JWST) pipeline stages. Data is assumed to be located in separate observation folders according to the paths set up below. Editing cells other than those in the [Configuration](#1.-Configuration) should not be necessary unless the standard pipeline processing options are modified.\n", + "\n", + "**Data**:
\n", + "This example is set up to use F1550C coronagraphic observations of the super-Jupiter exoplanet HIP 65426 b, obtained by [Program ID](https://www.stsci.edu/jwst/science-execution/program-information) 1386 (PI: S. Hinkley). It incorporates observations of the exoplanet host star HIP 65426 at two separate roll angles (1 exposure each); a PSF reference observation of the nearby star HIP 65219, taken with a 9-pt small grid dither pattern (9 exposures total); a background observation associated with the target star, taken with a 2-pt dither (two exposures); and a background observation associated with the PSF reference target, taken with a 2-pt dither (two exposures). \n", + "\n", + "The relevant observation numbers are:\n", + "\n", + "- Science observations: 8, 9
\n", + "- Science backgrounds: 30
\n", + "- Reference observations: 7
\n", + "- Reference backgrounds: 31
\n", + "\n", + "Example input data to use will be downloaded automatically unless disabled (i.e., to use local files instead).\n", + "\n", + "\n", + "**JWST pipeline version and CRDS context**\n", + "This notebook was written for the calibration pipeline version given above and uses the context associated with this version of the JWST Calibration Pipeline. Information about this and other contexts can be found in the JWST Calibration Reference Data System (CRDS) [server]((https://jwst-crds.stsci.edu/)). If you use different pipeline\n", + "versions, please refer to the table [here](https://jwst-crds.stsci.edu/display_build_contexts/) to determine what context to use. To learn more about the differences in the pipeline, read the relevant [documentation](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/jwst-operations-pipeline-build-information).\n", + "\n", + "**Updates**:\n", + "This notebook is regularly updated as improvements are made to the pipeline. Find the most up to date version of this notebook at:\n", + "[https://github.com/spacetelescope/jwst-pipeline-notebooks/](https://github.com/spacetelescope/jwst-pipeline-notebooks/)\n", + "\n", + "**Recent Changes**:
\n", + "Jan 28, 2025: Migrate from the `Coronagraphy_ExambleNB` notebook, update to Build 11.0 (jwst 1.15.1)." + ] + }, + { + "cell_type": "markdown", + "id": "241d9868", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "f191a7b7-a02d-4063-b224-b39b384a8709", + "metadata": {}, + "source": [ + "## Table of Contents\n", + "\n", + "1. [Configuration](#1.-Configuration)\n", + "2. [Package Imports](#2.-Package-Imports)\n", + "3. [Demo Mode Setup](#3.-Demo-Mode-Setup-(ignore-if-not-using-demo-data))\n", + "4. [Directory Setup](#4.-Directory-Setup)\n", + "5. [Detector1 Pipeline](#5.-Detector1-Pipeline)\n", + "6. [Image2 Pipeline](#6.-Image2-Pipeline)\n", + "7. [Coron3 Pipeline](#7.-Coron3-Pipeline)\n", + "8. [Plot the spectra](#8.-Plot-the-spectra)" + ] + }, + { + "cell_type": "markdown", + "id": "e6dd1599", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "bae53dc6", + "metadata": {}, + "source": [ + "1.-Configuration\n", + "------------------\n", + "Set basic parameters to use with this notebook. These will affect what data is used, where data is located (if already in disk), and pipeline modules run on this data. The list of parameters are as follows:\n", + "\n", + "* demo_mode\n", + "* directories with data\n", + "* mask\n", + "* filter\n", + "* pipeline modules" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cad6387d", + "metadata": {}, + "outputs": [], + "source": [ + "# Basic import necessary for configuration\n", + "import os" + ] + }, + { + "cell_type": "markdown", + "id": "9f88f6ee", + "metadata": {}, + "source": [ + "
\n", + "Note that demo_mode must be set appropriately below.\n", + "
\n", + "\n", + "Set demo_mode = True to run in demonstration mode. In this mode, this\n", + "notebook will download example data from the\n", + "Barbara A. Mikulski Archive for Space Telescopes (MAST) and process it through the pipeline.\n", + "This will all happen in a local directory unless modified\n", + "in [Section 3](#3.-Demo-Mode-Setup-(ignore-if-not-using-demo-data)) below. \n", + "\n", + "Set demo_mode = False if you want to process your own data that has already\n", + "been downloaded and provide the location of the data.
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b2f6fd5e", + "metadata": {}, + "outputs": [], + "source": [ + "# Set parameters for demo_mode, mask, filter, data mode directories, and \n", + "# processing steps.\n", + "\n", + "# -------------------------------Demo Mode---------------------------------\n", + "demo_mode = True\n", + "\n", + "if demo_mode:\n", + " print('Running in demonstration mode using online example data!')\n", + "\n", + "# -------------------------Data Mode Directories---------------------------\n", + "# If demo_mode = False, look for user data in these paths\n", + "if not demo_mode:\n", + " # Set directory paths for processing specific data; these will need\n", + " # to be changed to your local directory setup (below are given as\n", + " # examples)\n", + " user_home_dir = os.path.expanduser('~')\n", + "\n", + " # Point to where science observation data are\n", + " # Assumes uncalibrated data in sci_r1_dir/uncal/ and sci_r2_dir/uncal/, \n", + " # and results in stage1, stage2, stage3 directories\n", + " sci_r1_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs008/')\n", + " sci_r2_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs009/')\n", + "\n", + " # Point to where reference observation data are\n", + " # Assumes uncalibrated data in ref_dir/uncal/ and results in stage1,\n", + " # stage2, stage3 directories\n", + " ref_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs007/')\n", + "\n", + " # Point to where background observation data are\n", + " # Assumes uncalibrated data in sci_bg_dir/uncal/ and ref_bg_dir/uncal/,\n", + " # and results in stage1, stage2 directories\n", + " sci_bg_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs030/')\n", + " ref_bg_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs031/')\n", + "\n", + "# --------------------------Set Processing Steps--------------------------\n", + "# Whether or not to process only data from a given coronagraphic mask/\n", + "# filter (useful if overriding reference files) \n", + "# Note that BOTH parameters must be set in order to work\n", + "use_mask = '4QPM_1550' # '4QPM_1065', '4QPM_1140', '4QPM_1550', or 'LYOT_2300'\n", + "use_filter = 'F1550C' # 'F1065C', 'F1140C', 'F1550C', or 'F2300C'\n", + "\n", + "# Individual pipeline stages can be turned on/off here. Note that a later\n", + "# stage won't be able to run unless data products have already been\n", + "# produced from the prior stage.\n", + "\n", + "# Science processing\n", + "dodet1 = True # calwebb_detector1\n", + "doimage2 = True # calwebb_image2\n", + "docoron3 = True # calwebb_coron3\n", + "\n", + "# Background processing\n", + "dodet1bg = True # calwebb_detector1\n", + "doimage2bg = True # calwebb_image2" + ] + }, + { + "cell_type": "markdown", + "id": "4a6ef261", + "metadata": {}, + "source": [ + "### Set CRDS context and server\n", + "Before importing CRDS and JWST modules, we need to configure our environment. This includes defining a CRDS cache directory in which to keep the reference files that will be used by the calibration pipeline.\n", + "\n", + "If the root directory for the local CRDS cache directory has not been set already, it will be set to create one in the home directory." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c2c53535",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# ------------------------Set CRDS context and paths----------------------\n",
+ "# Each version of the calibration pipeline is associated with a specific CRDS\n",
+ "# context file. The pipeline will select the appropriate context file behind\n",
+ "# the scenes while running. However, if you wish to override the default context\n",
+ "# file and run the pipeline with a different context, you can set that using\n",
+ "# the CRDS_CONTEXT environment variable. Here we show how this is done,\n",
+ "# although we leave the line commented out in order to use the default context.\n",
+ "# If you wish to specify a different context, uncomment the line below.\n",
+ "#%env CRDS_CONTEXT jwst_1293.pmap\n",
+ "\n",
+ "# Check whether the local CRDS cache directory has been set.\n",
+ "# If not, set it to the user home directory\n",
+ "if (os.getenv('CRDS_PATH') is None):\n",
+ "    os.environ['CRDS_PATH'] = os.path.join(os.path.expanduser('~'), 'crds')\n",
+ "\n",
+ "# Check whether the CRDS server URL has been set. If not, set it.\n",
+ "if (os.getenv('CRDS_SERVER_URL') is None):\n",
+ "    os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'\n",
+ "\n",
+ "# Echo CRDS path in use\n",
+ "print('CRDS local filepath:', os.environ['CRDS_PATH'])\n",
+ "print('CRDS file server:', os.environ['CRDS_SERVER_URL'])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "17ec7020",
+ "metadata": {},
+ "source": [
+ "<hr style=\"border:1px solid gray\">"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8cd0c995",
+ "metadata": {},
+ "source": [
+ "## 2.-Package Imports\n",
+ "------------------"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "61e3464a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Use the entire available screen width for this notebook\n",
+ "from IPython.display import display, HTML\n",
+ "display(HTML(\"<style>.container { width:100% !important; }</style>\"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c7191bfd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Basic system utilities for interacting with files\n",
+ "# ----------------------General Imports------------------------------------\n",
+ "import glob\n",
+ "import copy\n",
+ "import time\n",
+ "from pathlib import Path\n",
+ "import re\n",
+ "\n",
+ "# Numpy for doing calculations\n",
+ "import numpy as np\n",
+ "\n",
+ "# -----------------------Astropy Imports-----------------------------------\n",
+ "# Astropy utilities for opening FITS and ASCII files, and downloading demo files\n",
+ "from astropy.io import fits\n",
+ "from astropy.wcs import WCS\n",
+ "from astropy import units\n",
+ "from astropy.coordinates import SkyCoord, Distance\n",
+ "from astroquery.mast import Observations, Mast\n",
+ "\n",
+ "# -----------------------Plotting Imports----------------------------------\n",
+ "# Matplotlib for making plots\n",
+ "import matplotlib.pyplot as plt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "59fdfe7e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# --------------JWST Calibration Pipeline Imports---------------------------\n",
+ "# Import the base JWST and calibration reference data packages\n",
+ "import jwst\n",
+ "import crds\n",
+ "\n",
+ "# JWST pipelines (each encompassing many steps)\n",
+ "from jwst.pipeline import Detector1Pipeline\n",
+ "from jwst.pipeline import Image2Pipeline\n",
+ "from jwst.pipeline import Coron3Pipeline\n",
+ "\n",
+ "# JWST pipeline utilities\n",
+ "from jwst import datamodels  # JWST datamodels\n",
+ "from jwst.associations import asn_from_list as afl  # Tools for creating association files\n",
+ "from jwst.associations.lib.rules_level2_base import DMSLevel2bBase  # Definition of a Lvl2 association file\n",
+ "from jwst.associations.lib.rules_level3_base import DMS_Level3_Base  # Definition of a Lvl3 association file\n",
+ "\n",
+ "from jwst.stpipe import Step  # Import the wrapper class for pipeline steps\n",
+ "\n",
+ "# Echo pipeline version and CRDS context in use\n",
+ "print(\"JWST Calibration Pipeline Version = {}\".format(jwst.__version__))\n",
+ "print(\"Using CRDS Context = {}\".format(crds.get_context_name('jwst')))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0f6131e0",
+ "metadata": {},
+ "source": [
+ "### Define convenience functions"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a62c2c44-1fc8-4041-a24f-a5d00a3a9ceb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define a convenience function to select only files of a given coronagraph mask/filter from an input set\n",
+ "def select_mask_filter_files(files, use_mask, use_filter):\n",
+ "    if (use_mask != '') & (use_filter != ''):\n",
+ "        keep = np.zeros(len(files))\n",
+ "        for ii in range(0, len(files)):\n",
+ "            with fits.open(files[ii]) as hdu:\n",
+ "                hdu.verify()\n",
+ "                hdr = hdu[0].header\n",
+ "                if 'CORONMSK' in hdr:\n",
+ "                    if ((hdr['CORONMSK'] == use_mask) & (hdr['FILTER'] == use_filter)):\n",
+ "                        keep[ii] = 1\n",
+ "        indx = 
np.where(keep == 1)\n", + " files_culled = files[indx]\n", + " else:\n", + " files_culled = files\n", + " \n", + " return files_culled" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2a2b7895", + "metadata": {}, + "outputs": [], + "source": [ + "# Start a timer to keep track of runtime\n", + "time0 = time.perf_counter()" + ] + }, + { + "cell_type": "markdown", + "id": "f16fc7ce", + "metadata": {}, + "source": [ + "
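",
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3f1a2b4c",
+ "metadata": {},
+ "source": [
+ "For reference, each of the pipeline stages imported above is composed of individual steps. The cell below is a minimal sketch that lists them; it assumes the standard stpipe `step_defs` mapping that these pipeline classes use to register their steps."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3f1a2b4d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# List the steps registered on each imported pipeline stage\n",
+ "# (step_defs maps step name -> step class in stpipe pipelines)\n",
+ "for pipeline in (Detector1Pipeline, Image2Pipeline, Coron3Pipeline):\n",
+ "    print(pipeline.__name__, '->', ', '.join(pipeline.step_defs))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3f1a2b4e",
+ "metadata": {},
+ "source": [
+ "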
" + ] + }, + { + "cell_type": "markdown", + "id": "c0cfec9f", + "metadata": {}, + "source": [ + "3.-Demo Mode Setup (ignore if not using demo data)\n", + "------------------\n", + "\n", + "If running in demonstration mode, set up the program information to\n", + "retrieve the uncalibrated data automatically from MAST using\n", + "[astroquery](https://astroquery.readthedocs.io/en/latest/mast/mast.html).\n", + "MAST allows for flexibility of searching by the proposal ID and the\n", + "observation ID instead of just filenames.
\n", + "\n", + "For illustrative purposes, we focus on data taken through the MIRI\n", + "[F1550C filter](https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-observing-modes/miri-coronagraphic-imaging#MIRICoronagraphicImaging-CoronFiltersCoronagraphfilters)\n", + "and start with uncalibrated raw data products (`uncal.fits`). The files use the following naming schema:\n", + "`jw0138600001_04101_0000_mirimage_uncal.fits`, where *obs* refers to the observation number and *dith* refers to the\n", + "dither step number.\n", + "\n", + "\n", + "More information about the JWST file naming conventions can be found at:\n", + "https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/file_naming.html" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "196d8895", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up the program information and paths for demo program\n", + "if demo_mode:\n", + " print('Running in demonstration mode and will download example data from MAST!')\n", + " program = \"01386\"\n", + " sci_r1_observtn = \"008\" \n", + " sci_r2_observtn = \"009\" \n", + " ref_observtn = \"007\" \n", + " bg_sci_observtn = \"030\" \n", + " bg_ref_observtn = \"031\"\n", + " basedir = os.path.join('.', 'miri_coro_demo_data')\n", + " download_dir = basedir\n", + " sci_r1_dir = os.path.join(basedir, 'Obs' + sci_r1_observtn)\n", + " sci_r2_dir = os.path.join(basedir, 'Obs' + sci_r2_observtn)\n", + " ref_dir = os.path.join(basedir, 'Obs' + ref_observtn)\n", + " bg_sci_dir = os.path.join(basedir, 'Obs' + bg_sci_observtn)\n", + " bg_ref_dir = os.path.join(basedir, 'Obs' + bg_ref_observtn)\n", + " uncal_sci_r1_dir = os.path.join(sci_r1_dir, 'uncal')\n", + " uncal_sci_r2_dir = os.path.join(sci_r2_dir, 'uncal')\n", + " uncal_ref_dir = os.path.join(ref_dir, 'uncal')\n", + " uncal_bg_sci_dir = os.path.join(bg_sci_dir, 'uncal')\n", + " uncal_bg_ref_dir = os.path.join(bg_ref_dir, 'uncal')\n", + "\n", + " # Ensure filepaths for input data exist\n", + " input_dirs = [uncal_sci_r1_dir, uncal_sci_r2_dir, uncal_ref_dir, uncal_bg_sci_dir, uncal_bg_ref_dir]\n", + "\n", + " for dir in input_dirs:\n", + " if not os.path.exists(dir):\n", + " os.makedirs(dir)" + ] + }, + { + "cell_type": "markdown", + "id": "f668f138", + "metadata": {}, + "source": [ + "Identify list of uncalibrated files associated with visits." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "28d6be3d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Obtain a list of observation IDs for the specified demo program\n",
+ "if demo_mode:\n",
+ "    obs_id_table = Observations.query_criteria(instrument_name=[\"MIRI/CORON\"],\n",
+ "                                               provenance_name=[\"CALJWST\"],\n",
+ "                                               proposal_id=[program]\n",
+ "                                               )\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e8cae672-e129-465e-bb7f-0254f78adc6d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Turn the list of visits into a list of uncalibrated data files\n",
+ "if demo_mode:\n",
+ "    # Define types of files to select\n",
+ "    file_dict = {'uncal': {'product_type': 'SCIENCE', 'productSubGroupDescription': 'UNCAL', 'calib_level': [1]}}\n",
+ "\n",
+ "    # Loop over visits identifying uncalibrated files that are associated with them\n",
+ "    files_to_download = []\n",
+ "    for exposure in obs_id_table:\n",
+ "        products = Observations.get_product_list(exposure)\n",
+ "        for filetype, query_dict in file_dict.items():\n",
+ "            filtered_products = Observations.filter_products(products, productType=query_dict['product_type'],\n",
+ "                                                             productSubGroupDescription=query_dict['productSubGroupDescription'],\n",
+ "                                                             calib_level=query_dict['calib_level'])\n",
+ "            files_to_download.extend(filtered_products['dataURI'])\n",
+ "\n",
+ "    # Cull to a unique list of files for each observation type\n",
+ "    # Science roll 1\n",
+ "    sci_r1_files_to_download = np.unique([i for i in files_to_download if str(program+sci_r1_observtn) in i])\n",
+ "    # Science roll 2\n",
+ "    sci_r2_files_to_download = np.unique([i for i in files_to_download if str(program+sci_r2_observtn) in i])\n",
+ "    # PSF Reference files\n",
+ "    ref_files_to_download = np.unique([i for i in files_to_download if str(program+ref_observtn) in i])\n",
+ "    # Background files (science assoc.)\n",
+ "    bg_sci_files_to_download = np.unique([i for i in files_to_download if str(program+bg_sci_observtn) in i])\n",
+ "    # Background files (reference assoc.)\n",
+ "    bg_ref_files_to_download = np.unique([i for i in files_to_download if str(program+bg_ref_observtn) in i])\n",
+ "\n",
+ "    print(\"Science files selected for downloading: \", len(sci_r1_files_to_download)+len(sci_r2_files_to_download))\n",
+ "    print(\"PSF Reference files selected for downloading: \", len(ref_files_to_download))\n",
+ "    print(\"Background files selected for downloading: \", len(bg_sci_files_to_download)+len(bg_ref_files_to_download))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3d4c0f1b",
+ "metadata": {},
+ "source": [
+ "There should be 6 Science files, 11 PSF Reference files and 4 Background files selected for downloading. \n",
+ "\n",
+ "\n",
+ "Download all the uncal files and place them into the appropriate directories.\n",
+ "\n",
+ "<div class=\"alert alert-block alert-warning\">
\n", + "Warning: If this notebook is halted during this step the downloaded file may be incomplete, and cause crashes later on!\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4dee926e", + "metadata": {}, + "outputs": [], + "source": [ + "if demo_mode:\n", + " #for filename in sci_r1_files_to_download:\n", + " # sci_r1_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_sci_r1_dir, Path(filename).name))\n", + " for filename in sci_r2_files_to_download:\n", + " sci_r2_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_sci_r2_dir, Path(filename).name))\n", + " #for filename in ref_files_to_download:\n", + " # ref_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_ref_dir, Path(filename).name))\n", + " #for filename in bg_sci_files_to_download:\n", + " # bg_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_bg_sci_dir, Path(filename).name))\n", + " #for filename in bg_ref_files_to_download:\n", + " # bg_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_bg_ref_dir, Path(filename).name))" + ] + }, + { + "cell_type": "markdown", + "id": "0da8a852", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "4ae87477", + "metadata": {}, + "source": [ + "4.-Directory Setup\n", + "------------------\n", + "Set up detailed paths to input/output stages here. We will set up individual `stage1/` and `stage2/` sub directories for each observation, but a single `stage3/` directory for the combined [calwebb_coron3 output products](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_coron3.html)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f013ab40", + "metadata": {}, + "outputs": [], + "source": [ + "# Define output subdirectories to keep science data products organized\n", + "# Sci Roll 1\n", + "det1_sci_r1_dir = os.path.join(sci_r1_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_sci_r1_dir = os.path.join(sci_r1_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "\n", + "# Sci Roll 2\n", + "det1_sci_r2_dir = os.path.join(sci_r2_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_sci_r2_dir = os.path.join(sci_r2_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "\n", + "# Define output subdirectories to keep PSF reference data products organized\n", + "det1_ref_dir = os.path.join(ref_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_ref_dir = os.path.join(ref_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "\n", + "# Define output subdirectories to keep background data products organized\n", + "# Sci Bkg\n", + "det1_bg_sci_dir = os.path.join(bg_sci_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_bg_sci_dir = os.path.join(bg_sci_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "\n", + "# Ref Bkg\n", + "det1_bg_ref_dir = os.path.join(bg_ref_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_bg_ref_dir = os.path.join(bg_ref_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "\n", + "# Single stage3 directory for combined coron3 products.\n", + "coron3_dir = os.path.join(basedir, 'stage3')\n", + "\n", + "# We need to check that the desired output directories exist, and if not create them\n", + "det1_dirs = [det1_sci_r1_dir, det1_sci_r2_dir, det1_ref_dir, det1_bg_sci_dir, det1_bg_ref_dir]\n", + "image2_dirs = [image2_sci_r1_dir, image2_sci_r2_dir, image2_ref_dir, image2_bg_sci_dir, image2_bg_ref_dir]\n", + "\n", + "for dir in det1_dirs:\n", + " if not os.path.exists(dir):\n", + " os.makedirs(dir)\n", + "for dir in image2_dirs:\n", + " if not os.path.exists(dir):\n", + " os.makedirs(dir)\n", + "if not os.path.exists(coron3_dir):\n", + " os.makedirs(coron3_dir)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ad3084f5", + "metadata": {}, + "outputs": [], + "source": [ + "# Print out the time benchmark\n", + "time1 = time.perf_counter()\n", + "print(f\"Runtime so far: {time1 - time0:0.4f} seconds\")" + ] + }, + { + "cell_type": "markdown", + "id": "497228f1", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "b6603987-5168-45b4-b527-d1572de46495", + "metadata": {}, + "source": [ + "5.-Detector1 Pipeline\n", + "------------------\n", + "In this section, we process our uncalibrated data through the calwebb_detector1 pipeline to create Stage 1 data products. For coronagraphic exposures, these data products include a `*_rate.fits` file (a 2D countrate product, based on averaging over all integrations in the exposure), but specifically also a `*_rateints.fits` file, a 3D countrate product, that contains the individual results of each integration, wherein 2D countrate images for each integration are stacked along the 3rd axis of the data cubes (ncols x nrows x nints). These data products have units of DN/s.\n", + "\n", + "See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/stages-of-jwst-data-processing/calwebb_detector1\n", + "\n", + "By default, all steps in the calwebb_detector1 are run for MIRI except: the [ipc](https://jwst-pipeline.readthedocs.io/en/stable/jwst/ipc/index.html#ipc-step) and [charge_migration](https://jwst-pipeline.readthedocs.io/en/stable/jwst/charge_migration/index.html#charge-migration-step) steps. There are also several steps performed for MIRI data that are not performed for other instruments. These include: [emicorr](https://jwst-pipeline.readthedocs.io/en/latest/jwst/emicorr/index.html#emicorr-step), [firstframe](https://jwst-pipeline.readthedocs.io/en/latest/jwst/firstframe/index.html#firstframe-step), [lastframe](https://jwst-pipeline.readthedocs.io/en/latest/jwst/lastframe/index.html#lastframe-step), [reset](https://jwst-pipeline.readthedocs.io/en/latest/jwst/reset/index.html#reset-step) and [rscd](https://jwst-pipeline.readthedocs.io/en/latest/jwst/rscd/index.html#rscd-step).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "6c08d81a", + "metadata": {}, + "source": [ + "
\n", + "To override certain steps and reference files, use the examples provided below.
\n", + "E.g., turn on detection of cosmic ray showers.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d15c670b", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up a dictionary to define how the Detector1 pipeline should be configured\n", + "\n", + "# Boilerplate dictionary setup\n", + "det1dict = {}\n", + "det1dict['group_scale'], det1dict['dq_init'], det1dict['emicorr'], det1dict['saturation'] = {}, {}, {}, {}\n", + "det1dict['firstframe'], det1dict['lastframe'], det1dict['reset'], det1dict['linearity'], det1dict['rscd'] = {}, {}, {}, {}, {}\n", + "det1dict['dark_current'], det1dict['refpix'], det1dict['jump'], det1dict['ramp_fit'], det1dict['gain_scale'] = {}, {}, {}, {}, {}\n", + "\n", + "# Overrides for whether or not certain steps should be skipped (example)\n", + "# skipping refpix step\n", + "#det1dict['refpix']['skip'] = True\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#det1dict['dq_init']['override_mask'] = 'myfile.fits' # Bad pixel mask\n", + "#det1dict['saturation']['override_saturation'] = 'myfile.fits' # Saturation\n", + "#det1dict['reset']['override_reset'] = 'myfile.fits' # Reset\n", + "#det1dict['linearity']['override_linearity'] = 'myfile.fits' # Linearity\n", + "#det1dict['rscd']['override_rscd'] = 'myfile.fits' # RSCD\n", + "#det1dict['dark_current']['override_dark'] = 'myfile.fits' # Dark current subtraction\n", + "#det1dict['jump']['override_gain'] = 'myfile.fits' # Gain used by jump step\n", + "#det1dict['ramp_fit']['override_gain'] = 'myfile.fits' # Gain used by ramp fitting step\n", + "#det1dict['jump']['override_readnoise'] = 'myfile.fits' # Read noise used by jump step\n", + "#det1dict['ramp_fit']['override_readnoise'] = 'myfile.fits' # Read noise used by ramp fitting step\n", + "\n", + "# Turn on multi-core processing (off by default). Choose what fraction of cores to use (quarter, half, or all)\n", + "det1dict['jump']['maximum_cores'] = 'half' \n", + "det1dict['ramp_fit']['maximum_cores'] = 'half'\n", + "\n", + "# Save the frame-averaged dark data created during the dark current subtraction step\n", + "det1dict['dark_current']['dark_output'] = 'dark.fits' # Frame-averaged dark \n", + "\n", + "# Turn on detection of cosmic ray showers (off by default)\n", + "#det1dict['jump']['find_showers'] = True" + ] + }, + { + "cell_type": "markdown", + "id": "6f84c859", + "metadata": {}, + "source": [ + "
\n", + "Below an example of how to insert custom pipeline steps using the\n", + "pre-hook/post-hook framework.\n", + "\n", + "For more information see [Tips and Trick for working with the JWST Pipeline](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/tips-and-tricks-for-working-with-the-jwst-pipeline)\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "41f72f68", + "metadata": {}, + "outputs": [], + "source": [ + "# Define a new step called XplyStep that multiplies everything by 1.0\n", + "# I.e., it does nothing, but could be changed to do something more interesting.\n", + "class XplyStep(Step):\n", + " spec = '''\n", + " '''\n", + " class_alias = 'xply'\n", + "\n", + " def process(self, input_data):\n", + " with datamodels.open(input_data) as model:\n", + " result = model.copy()\n", + " sci = result.data\n", + " sci = sci * 1.0\n", + " result.data = sci\n", + " self.log.info('Multiplied everything by one in custom step!')\n", + " return result\n", + "\n", + "\n", + "# And here we'll insert it into our pipeline dictionary to be run at the end right after the gain_scale step\n", + "det1dict['gain_scale']['post_hooks'] = [XplyStep]" + ] + }, + { + "cell_type": "markdown", + "id": "11e52a45", + "metadata": {}, + "source": [ + "### Calibrating Science Files\n", + "Look for input science files and run calwebb_detector1 pipeline using the call method. There should be 2 input science files, one for the observation at roll 1 (Obs 8) and one for the observation at roll 2 (Obs 9)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "acc880b7", + "metadata": {}, + "outputs": [], + "source": [ + "# Look for input files of the form *uncal.fits from the science observation\n", + "sstring1 = os.path.join(uncal_sci_r1_dir, 'jw*mirimage*uncal.fits')\n", + "sstring2 = os.path.join(uncal_sci_r2_dir, 'jw*mirimage*uncal.fits')\n", + "\n", + "uncal_sci_r1_files = np.array(sorted(glob.glob(sstring1)))\n", + "uncal_sci_r2_files = np.array(sorted(glob.glob(sstring2)))\n", + "\n", + "# Check that these are the correct mask/filter to use\n", + "uncal_sci_r1_files = select_mask_filter_files(uncal_sci_r1_files, use_mask, use_filter)\n", + "uncal_sci_r2_files = select_mask_filter_files(uncal_sci_r2_files, use_mask, use_filter)\n", + "\n", + "print('Found ' + str((len(uncal_sci_r1_files)+len(uncal_sci_r2_files))) + ' science input files')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2d610e7f-0332-4a7a-8017-525da7aaf2f1", + "metadata": {}, + "outputs": [], + "source": [ + "sstring1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "10e5acc8", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "# Run the pipeline on these input files by a simple loop over files using\n", + "# our custom parameter dictionary\n", + "if dodet1:\n", + " #for file in uncal_sci_r1_files:\n", + " # Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_sci_r1_dir)\n", + " for file in uncal_sci_r2_files:\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_sci_r2_dir)\n", + "else:\n", + " print('Skipping Detector1 processing for SCI data')" + ] + }, + { + "cell_type": "markdown", + "id": "1dba8c73-ce57-48e4-97c7-39504f777be3", + "metadata": {}, + "source": [ + "### Calibrating PSF Reference Files\n", + "Look for input PSF Reference files and run calwebb_detector1\n", + "pipeline using the call method. \n", + "\n", + "There should be 9 files in total, one for each exposure of the PSF reference target taken in the 9-point dither pattern. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ed8a79ce-e010-4f87-927e-c312a00201a6", + "metadata": {}, + "outputs": [], + "source": [ + "# Now let's look for input files of the form *uncal.fits from the background\n", + "# observations\n", + "sstring = os.path.join(uncal_ref_dir, 'jw*mirimage*uncal.fits')\n", + "uncal_ref_files = np.array(sorted(glob.glob(sstring)))\n", + "# Check that these are the band/channel to use\n", + "uncal_ref_files = select_mask_filter_files(uncal_ref_files, use_mask, use_filter)\n", + "\n", + "print('Found ' + str(len(uncal_ref_files)) + ' PSF reference input files')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8212570d-f981-4f72-b29a-dbe5f359f3fa", + "metadata": {}, + "outputs": [], + "source": [ + "# Run the pipeline on these input files by a simple loop over files using\n", + "# our custom parameter dictionary\n", + "if dodet1:\n", + " for file in uncal_ref_files:\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_ref_dir)\n", + "else:\n", + " print('Skipping Detector1 processing for PSF reference data')" + ] + }, + { + "cell_type": "markdown", + "id": "b411a0d7", + "metadata": {}, + "source": [ + "### Calibrating Background Files\n", + "Look for input background files and run calwebb_detector1\n", + "pipeline using the call method. \n", + "\n", + "There should be 4 background files in total: two exposures of the background target associated with the science target (taken in the 2-point dither) and two exposures of the background target associated with the PSF reference target (taken in the 2-point dither)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3cc3eef4", + "metadata": {}, + "outputs": [], + "source": [ + "# Look for input files of the form *uncal.fits from the background\n", + "# observations\n", + "sstring1 = os.path.join(uncal_bg_sci_dir, 'jw*mirimage*uncal.fits')\n", + "sstring2 = os.path.join(uncal_bg_ref_dir, 'jw*mirimage*uncal.fits')\n", + "\n", + "uncal_bg_sci_files = np.array(sorted(glob.glob(sstring1)))\n", + "uncal_bg_ref_files = np.array(sorted(glob.glob(sstring2)))\n", + "\n", + "# Check that these are the filter to use\n", + "uncal_bg_sci_files = select_mask_filter_files(uncal_bg_sci_files, use_mask, use_filter)\n", + "uncal_bg_ref_files = select_mask_filter_files(uncal_bg_ref_files, use_mask, use_filter)\n", + "\n", + "print('Found ' + str((len(uncal_bg_sci_files)+len(uncal_bg_ref_files))) + ' background input files')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9ecb2c84", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "# Run the pipeline on these input files by a simple loop over files using\n", + "# our custom parameter dictionary\n", + "if dodet1bg:\n", + " for file in uncal_bg_sci_files:\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_bg_sci_dir)\n", + " for file in uncal_bg_ref_files:\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_bg_ref_dir)\n", + "else:\n", + " print('Skipping Detector1 processing for BG data')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a2d984a0", + "metadata": {}, + "outputs": [], + "source": [ + "# Print out the time benchmark\n", + "time1 = time.perf_counter()\n", + "print(f\"Runtime so far: {time1 - time0:0.4f} seconds\")" + ] + }, + { + "cell_type": "markdown", + "id": "41541b35", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "81419572", + "metadata": {}, + "source": [ + "6.-Image2 Pipeline\n", + "------------------\n", + "\n", + "In this section we process our 3D countrate (`rateints`) products from\n", + "Stage 1 (calwebb_detector1) through the Image2 (calwebb_image2) pipeline\n", + "in order to produce Stage 2\n", + "data products (i.e., 3D calibrated `calints` and 3D background-subtracted `bsubints` data). These data products have units of MJy/sr.\n", + "\n", + "In this pipeline processing stage, the [background subtraction](https://jwst-pipeline.readthedocs.io/en/latest/jwst/background_step/index.html#background-step)\n", + "step is performed (if the data has a dedicated background defined), the [world coordinate system (WCS)](https://jwst-pipeline.readthedocs.io/en/latest/jwst/assign_wcs/index.html#assign-wcs-step)\n", + "is assigned, the data is [flat fielded](https://jwst-pipeline.readthedocs.io/en/latest/jwst/flatfield/index.html#flatfield-step),\n", + "and a [photometric calibration](https://jwst-pipeline.readthedocs.io/en/latest/jwst/photom/index.html#photom-step)\n", + "is applied to convert from units of countrate (ADU/s) to surface brightness (MJy/sr). \n", + "\n", + "The [resampling](https://jwst-pipeline.readthedocs.io/en/latest/jwst/resample/index.html#resample-step) step is performed, to create resampled images of each dither position, but this is only a quick-look product. The resampling step occurs during the Coron3 stage by default. While the resampling step is done in the Image2 stage, the data quality from the Coron3 stage will be better since the bad pixels, which adversely affect both the centroids and photometry in individual images, will be mostly removed.\n", + "\n", + "See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/stages-of-jwst-data-processing/calwebb_image2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c120d056-7135-4977-a48e-0c69306ef0be", + "metadata": {}, + "outputs": [], + "source": [ + "time_image2 = time.perf_counter()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5cdbbe04", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up a dictionary to define how the Image2 pipeline should be configured.\n", + "\n", + "# Boilerplate dictionary setup\n", + "image2dict = {}\n", + "image2dict['assign_wcs'], image2dict['bkg_subtract'], image2dict['flat_field'], image2dict['photom'], image2dict['resample'] = {}, {}, {}, {}, {}\n", + "\n", + "# Overrides for whether or not certain steps should be skipped (example)\n", + "#image2dict['resample']['skip'] = False\n", + "#image2dict['bkg_subtract']['skip'] = True\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#image2dict['assign_wcs']['override_distortion'] = 'myfile.asdf' # Spatial distortion (ASDF file)\n", + "#image2dict['assign_wcs']['override_filteroffset'] = 'myfile.asdf' # Imager filter offsets (ASDF file)\n", + "#image2dict['assign_wcs']['override_specwcs'] = 'myfile.asdf' # Spectral distortion (ASDF file)\n", + "#image2dict['assign_wcs']['override_wavelengthrange'] = 'myfile.asdf' # Wavelength channel mapping (ASDF file)\n", + "#image2dict['flat_field']['override_flat'] = 'myfile.fits' # Pixel flatfield\n", + "#image2dict['photom']['override_photom'] = 'myfile.fits' # Photometric calibration array\n", + "\n", + "# Save the combined background used for subtraction\n", + 
"image2dict['bkg_subtract']['save_combined_background'] = True \n", + "\n", + "# Relevant step-specific arguments for background subtraction\n", + "#image2dict['bkg_subtract']['sigma'] = 3.0 # Number of standard deviations to use for sigma-clipping\n", + "#image2dict['bkg_subtract']['maxiters'] = None # Number of clipping iterations to perform when combining multiple background images. If None, will clip until convergence is achieved\n", + "\n", + "# Relevant step-specific arguments for flat field\n", + "#image2dict['flat_field']['user_supplied_flat'] = 'myfile.fits' # Path to user-supplied Flat-field image \n", + "#image2dict['flat_field']['inverse'] = False # Whether to inverse the math operations used to apply the Flat-field (i.e. multiply instead of divide)\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#image2dict['assign_wcs']['override_distortion'] = 'myfile.asdf' # Spatial distortion (ASDF file)\n", + "#image2dict['assign_wcs']['override_filteroffset'] = 'myfile.asdf' # Imager filter offsets (ASDF file)\n", + "#image2dict['flat_field']['override_flat'] = 'myfile.fits' # Pixel flatfield\n", + "#image2dict['photom']['override_photom'] = 'myfile.fits' # Photometric calibration array" + ] + }, + { + "cell_type": "markdown", + "id": "1f534112", + "metadata": {}, + "source": [ + "Define a function to create association files for Stage 2. This will enable use of the background subtraction, if chosen above. \n", + "\n", + "\n", + "
\n", + "Note that the background will not be applied properly to all files if more than *one* SCI file is included in the association.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aaa58644-c300-47fa-a4ba-99b83fe65cea", + "metadata": {}, + "outputs": [], + "source": [ + "def writel2asn(onescifile, bgfiles, asnfile, prodname):\n", + " # Define the basic association of science files\n", + " asn = afl.asn_from_list([onescifile], rule=DMSLevel2bBase, product_name=prodname) # Wrap in array since input was single exposure\n", + "\n", + " #Coron/filter configuration for this sci file\n", + " with fits.open(onescifile) as hdu:\n", + " hdu.verify()\n", + " hdr = hdu[0].header\n", + " this_mask, this_filter = hdr['CORONMSK'], hdr['FILTER']\n", + "\n", + " # Find which background files are appropriate to this mask/filter and add to association\n", + " for file in bgfiles:\n", + " hdu.verify()\n", + " hdr = hdu[0].header\n", + " if hdr['FILTER'] == this_filter:\n", + " asn['products'][0]['members'].append({'expname': file, 'exptype': 'background'})\n", + "\n", + " # Write the association to a json file\n", + " _, serialized = asn.dump()\n", + " with open(asnfile, 'w') as outfile:\n", + " outfile.write(serialized)" + ] + }, + { + "cell_type": "markdown", + "id": "a893f72c", + "metadata": {}, + "source": [ + "Find and sort all of the input files, ensuring use of absolute paths. \n", + "\n", + "The input files should be `rateints.fits` products and there should be a total of 2 files corresponding to the science target; 9 files corresponding to the reference target; 2 files corresponding to the science background target and 2 files corresponding to the reference background target." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2f1f4c38-31ea-4ddf-a660-5c77b9436f29", + "metadata": {}, + "outputs": [], + "source": [ + "# Science Files \n", + "# Roll 1\n", + "sstring = os.path.join(det1_sci_r1_dir, 'jw*mirimage*rateints.fits') # Use files from the detector1 output folder\n", + "sci_r1_files = sorted(glob.glob(sstring))\n", + "for ii in range(0, len(sci_r1_files)):\n", + " sci_r1_files[ii] = os.path.abspath(sci_r1_files[ii])\n", + "sci_r1_files = np.array(sci_r1_files)\n", + "# Check that these are the mask/filter to use\n", + "sci_r1_files = select_mask_filter_files(sci_r1_files, use_mask, use_filter)\n", + "# Roll 2\n", + "sstring = os.path.join(det1_sci_r2_dir, 'jw*mirimage*rateints.fits') # Use files from the detector1 output folder\n", + "sci_r2_files = sorted(glob.glob(sstring))\n", + "for ii in range(0, len(sci_r2_files)):\n", + " sci_r2_files[ii] = os.path.abspath(sci_r2_files[ii])\n", + "sci_r2_files = np.array(sci_r2_files)\n", + "sci_r2_files = select_mask_filter_files(sci_r2_files, use_mask, use_filter)\n", + "\n", + "# PSF Ref Files\n", + "sstring = os.path.join(det1_ref_dir, 'jw*mirimage*rateints.fits')\n", + "ref_files = sorted(glob.glob(sstring))\n", + "for ii in range(0, len(ref_files)):\n", + " ref_files[ii] = os.path.abspath(ref_files[ii])\n", + "ref_files = np.array(ref_files)\n", + "ref_files = select_mask_filter_files(ref_files, use_mask, use_filter)\n", + "\n", + "# Background Files\n", + "# Sci Bkg\n", + "sstring = os.path.join(det1_bg_sci_dir, 'jw*mirimage*rateints.fits')\n", + "bg_sci_files = sorted(glob.glob(sstring))\n", + "for ii in range(0, len(bg_sci_files)):\n", + " bg_sci_files[ii] = os.path.abspath(bg_sci_files[ii])\n", + "bg_sci_files = np.array(bg_sci_files)\n", + "bg_sci_files = select_mask_filter_files(bg_sci_files, use_mask, use_filter)\n", + "\n", + "# Ref Bkg \n", + "sstring = os.path.join(det1_bg_ref_dir, 'jw*mirimage*rateints.fits')\n", + 
"bg_ref_files = sorted(glob.glob(sstring))\n", + "for ii in range(0, len(bg_ref_files)):\n", + " bg_ref_files[ii] = os.path.abspath(bg_ref_files[ii])\n", + "bg_ref_files = np.array(bg_ref_files)\n", + "# Check that these are the mask/filter to use\n", + "bg_ref_files = select_mask_filter_files(bg_ref_files, use_mask, use_filter)\n", + "\n", + "print('Found ' + str(len(sci_r1_files) + len(sci_r2_files)) + ' science files')\n", + "print('Found ' + str(len(ref_files)) + ' reference files')\n", + "print('Found ' + str(len(bg_sci_files)) + ' science background files')\n", + "print('Found ' + str(len(bg_ref_files)) + ' reference background files')" + ] + }, + { + "cell_type": "markdown", + "id": "cbefec2c", + "metadata": {}, + "source": [ + "Step through each of the science files, using relevant associated backgrounds in calwebb_image2 processing." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "50aa076d", + "metadata": {}, + "outputs": [], + "source": [ + "if doimage2:\n", + " # Science Roll 1\n", + " # Generate a proper background-subtracting association file\n", + " #for file in sci_r1_files:\n", + " # asnfile = os.path.join(image2_sci_r1_dir, 'l2asn.json')\n", + " # writel2asn(file, bg_sci_files, asnfile, 'Level2')\n", + " # Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_sci_r1_dir)\n", + " # Science Roll 2\n", + " # Generate a proper background-subtracting association file\n", + " for file in sci_r2_files:\n", + " asnfile = os.path.join(image2_sci_r2_dir, 'l2asn.json')\n", + " writel2asn(file, bg_sci_files, asnfile, 'Level2')\n", + " Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_sci_r2_dir)\n", + "else:\n", + " print('Skipping Image2 processing for SCI data')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "90845487-f432-46cc-b1df-048e6e419ff1", + "metadata": {}, + "outputs": [], + "source": [ + "if doimage2:\n", + " for file in ref_files:\n", + " # Extract the dither number to use in asn filename\n", + " match = re.compile(r'(\\d{5})_mirimage').search(file)\n", + " # Generate a proper background-subtracting association file\n", + " asnfile = os.path.join(image2_ref_dir, match.group(1)+'_l2asn.json')\n", + " writel2asn(file, bg_ref_files, asnfile, 'Level2')\n", + " Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_ref_dir) \n", + "else:\n", + " print('Skipping Image2 processing for PSF REF data')" + ] + }, + { + "cell_type": "markdown", + "id": "224e3bd3", + "metadata": {}, + "source": [ + "Reduce the backgrounds individually." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ccc3199a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if doimage2bg:\n",
+ "    for file in bg_sci_files:\n",
+ "        Image2Pipeline.call(file, steps=image2dict, save_results=True, output_dir=image2_bg_sci_dir)\n",
+ "    for file in bg_ref_files:\n",
+ "        Image2Pipeline.call(file, steps=image2dict, save_results=True, output_dir=image2_bg_ref_dir)\n",
+ "else:\n",
+ "    print('Skipping Image2 processing for BG data')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fa83c83d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Print out the time benchmark\n",
+ "time1 = time.perf_counter()\n",
+ "print(f\"Runtime so far: {time1 - time0:0.4f} seconds\")\n",
+ "print(f\"Runtime for Image2: {time1 - time_image2:0.4f} seconds\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3b3c6415",
+ "metadata": {},
+ "source": [
+ "
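",
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7a8b9c0d",
+ "metadata": {},
+ "source": [
+ "As an optional check of the Stage 2 outputs (a minimal sketch; it assumes the Image2 runs above completed), the cell below opens one calibrated science product and prints its shape and surface-brightness units."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7a8b9c0e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional quick look at one Stage 2 science product\n",
+ "calints_check = sorted(glob.glob(os.path.join(image2_sci_r2_dir, '*calints.fits')))\n",
+ "if len(calints_check) > 0:\n",
+ "    with datamodels.open(calints_check[0]) as model:\n",
+ "        # bunit_data records the units (MJy/sr after the photom step)\n",
+ "        print(model.meta.filename, model.data.shape, model.meta.bunit_data)\n",
+ "else:\n",
+ "    print('No calints products found yet')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7a8b9c0f",
+ "metadata": {},
+ "source": [
+ "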
" + ] + }, + { + "cell_type": "markdown", + "id": "64771f88", + "metadata": {}, + "source": [ + "7.-Coron3 Pipeline\n", + "------------------\n", + "In this section, we'll run the Coron3 (calwebb_coron3) pipeline on the calibrated MIRI coronagraphic exposures to produce PSF-subtracted, resampled, combined images of the source object. The input to calwebb_coron3 must be in the form of an association file that lists one or more exposures of a science target and one or more reference PSF targets. The individual target and reference PSF exposures should be in the form of 3D photometrically calibrated (`_calints`) products from calwebb_image2 processing. Each pipeline step will loop over the 3D stack of per-integration images contained in each exposure. The relevant steps are:\n", + "\n", + "- [outlier_detection](https://jwst-pipeline.readthedocs.io/en/latest/jwst/outlier_detection/index.html#outlier-detection-step): CR-flag all PSF and science target exposures\n", + "- [stack_refs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/stack_refs/index.html#stack-refs-step): Reference PSF stacking \n", + "- [align_refs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/align_refs/index.html#align-refs-step): Reference PSF alignment\n", + "- [klip](https://jwst-pipeline.readthedocs.io/en/latest/jwst/klip/index.html#klip-step): PSF subtraction with the KLIP algorithm\n", + "- [resample](https://jwst-pipeline.readthedocs.io/en/latest/jwst/resample/index.html#resample-step): Image resampling and World Coordinate System registration\n", + "\n", + "See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/stages-of-jwst-data-processing/calwebb_coron3\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2e3b245a-070d-4e86-ad22-5c62ee35ce0e", + "metadata": {}, + "outputs": [], + "source": [ + "time_coron3 = time.perf_counter()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0f41afe2", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up a dictionary to define how the Coron3 pipeline should be configured\n", + "\n", + "# Boilerplate dictionary setup\n", + "coron3dict = {}\n", + "coron3dict['outlier_detection'], coron3dict['stack_refs'], coron3dict['align_refs'], coron3dict['klip'], coron3dict['resample'] = {}, {}, {}, {}, {}\n", + "\n", + "# Set the maximum number of KL transform rows to keep when computing the PSF fit to the target.\n", + "coron3dict['klip']['truncate'] = 25 # The maximum number of KL modes to use.\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#coron3dict['align_refs']['override_psfmask'] = 'myfile.fits' # The PSFMASK reference file \n", + "\n", + "# Options for adjusting performance for the outlier detection step\n", + "#coron3dict['outlier_detection']['kernel_size'] = '7 7' # Dial this to adjust the detector kernel size\n", + "#coron3dict['outlier_detection']['threshold_percent'] = 99.8 # Dial this to be more/less aggressive in outlier flagging (values closer to 100% are less aggressive)\n", + "\n", + "# Options for adjusting the resample step\n", + "#coron3dict['resample']['pixfrac'] = 1.0 # Fraction by which input pixels are “shrunk” before being drizzled onto the output image grid \n", + "#coron3dict['resample']['kernel'] = 'square' # Kernel form used to distribute flux onto the output image \n", + "#coron3dict['resample']['fillval'] = 'INDEF' # Value to assign to output pixels that have zero weight or do not receive any 
flux from any input pixels during drizzling\n",
+ "#coron3dict['resample']['weight_type'] = 'ivm'  # Weighting type for each input image\n",
+ "#coron3dict['resample']['output_shape'] = None\n",
+ "#coron3dict['resample']['crpix'] = None\n",
+ "#coron3dict['resample']['crval'] = None\n",
+ "#coron3dict['resample']['rotation'] = None\n",
+ "#coron3dict['resample']['pixel_scale_ratio'] = 1.0\n",
+ "#coron3dict['resample']['pixel_scale'] = None\n",
+ "#coron3dict['resample']['output_wcs'] = ''\n",
+ "#coron3dict['resample']['single'] = False\n",
+ "#coron3dict['resample']['blendheaders'] = True"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "86b1952f",
+ "metadata": {},
+ "source": [
+ "Define a function to create association files for Stage 3."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "683839d2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def writel3asn(scifiles, reffiles, asnfile, prodname):\n",
+ "    \"\"\"Create an association from a list of science exposures and a list of PSF reference exposures, \n",
+ "    intended for calwebb_coron3 processing.\n",
+ "    \n",
+ "    Parameters\n",
+ "    ----------\n",
+ "    scifiles : list\n",
+ "        List of science files\n",
+ "    reffiles : list\n",
+ "        List of reference files\n",
+ "    asnfile : str\n",
+ "        The path to the association file.\n",
+ "    prodname : str\n",
+ "        The product name for the association.\n",
+ "    \"\"\"\n",
+ "    # Define the basic association of science files\n",
+ "    asn = afl.asn_from_list(scifiles, rule=DMS_Level3_Base, product_name=prodname)\n",
+ "\n",
+ "    # Add reference files to the association\n",
+ "    nref = len(reffiles)\n",
+ "    for ii in range(0, nref):\n",
+ "        asn['products'][0]['members'].append({'expname': reffiles[ii], 'exptype': 'psf'})\n",
+ "\n",
+ "    # Write the association to a json file\n",
+ "    _, serialized = asn.dump()\n",
+ "    with open(asnfile, 'w') as outfile:\n",
+ "        outfile.write(serialized)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "55cb1e41",
+ "metadata": {},
+ "source": [
+ "Find and sort all of the input files, ensuring use of absolute paths. There should be 2 science files and 9 PSF reference files."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f519d512", + "metadata": {}, + "outputs": [], + "source": [ + "# Science Files need the calints.fits files\n", + "sstring = os.path.join(image2_sci_r1_dir, 'jw*mirimage*calints.fits')\n", + "sstring2 = os.path.join(image2_sci_r2_dir, 'jw*mirimage*calints.fits')\n", + "r1_calfiles = sorted(glob.glob(sstring))\n", + "r2_calfiles = sorted(glob.glob(sstring2))\n", + "calfiles = r1_calfiles + r2_calfiles\n", + "for ii in range(0, len(calfiles)):\n", + " calfiles[ii] = os.path.abspath(calfiles[ii])\n", + "calfiles = np.array(calfiles)\n", + "# Check that these are the mask/filter to use\n", + "calfiles = select_mask_filter_files(calfiles, use_mask, use_filter)\n", + "\n", + "# Reference Files need the calints.fits files\n", + "sstring = os.path.join(image2_ref_dir, 'jw*mirimage*calints.fits')\n", + "reffiles = sorted(glob.glob(sstring))\n", + "for ii in range(0, len(reffiles)):\n", + " reffiles[ii] = os.path.abspath(reffiles[ii])\n", + "reffiles = np.array(reffiles)\n", + "# Check that these are the mask/filter to use\n", + "reffiles = select_mask_filter_files(reffiles, use_mask, use_filter)\n", + "\n", + "print('Found ' + str(len(calfiles)) + ' science files to process')\n", + "print('Found ' + str(len(reffiles)) + ' reference PSF files to process')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0e932d7a-d0b5-4148-a641-2cc019ae9e51", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "calfiles" + ] + }, + { + "cell_type": "markdown", + "id": "fad933a4", + "metadata": {}, + "source": [ + "Make an association file that includes all of the different exposures. If using Master Background subtraction include the background data.\n", + "\n", + "
\n", + "Note that science data must be of type cal.fits and background exposures must be of type x1d.fits\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a48a7a97-c5c4-42cc-badf-aace8454c2f2", + "metadata": {}, + "outputs": [], + "source": [ + "asnfile" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cc9918b7", + "metadata": {}, + "outputs": [], + "source": [ + "if docoron3:\n", + " asnfile = os.path.join(coron3_dir, 'l3asn.json')\n", + " writel3asn(calfiles, reffiles, asnfile, 'Level 3')\n", + " Coron3Pipeline.call(asnfile, steps=coron3dict, save_results=True, output_dir=coron3_dir)\n", + "else:\n", + " print('Skipping coron3 processing')" + ] + }, + { + "cell_type": "markdown", + "id": "280528b8", + "metadata": {}, + "source": [ + "Run calwebb_image3 using the call method." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8e22779a", + "metadata": {}, + "outputs": [], + "source": [ + "# Print out the time benchmark\n", + "time1 = time.perf_counter()\n", + "print(f\"Runtime so far: {time1 - time0:0.4f} seconds\")\n", + "print(f\"Runtime for coron3: {time1 - time_coron3} seconds\")" + ] + }, + { + "cell_type": "markdown", + "id": "1426db95", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "2966d5ff", + "metadata": {}, + "source": [ + "8.-Examine the output\n", + "------------------\n", + "Here we'll plot the spectra to see what our source looks like." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9eda58c2-bdc0-42ad-a6b3-addb962cd3e9", + "metadata": {}, + "outputs": [], + "source": [ + "imgs = {'roll1': datamodels.open(\"./miri_coro_demo_data/stage3/jw01386008001_04101_00001_mirimage_a3001_psfsub.fits\").data.copy(),\n", + " 'roll2': datamodels.open(\"./miri_coro_demo_data/stage3/jw01386009001_04101_00001_mirimage_a3001_psfsub.fits\").data.copy(),\n", + " 'combo': datamodels.open(\"./miri_coro_demo_data/stage3/Level 3_i2d.fits\").data.copy()}\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cbf94f08", + "metadata": {}, + "outputs": [], + "source": [ + "fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(16, 8))\n", + "vmin, vmax = np.nanquantile(np.concatenate(list([i.ravel() for i in imgs.values()])), [0.05, 0.95])\n", + "for i, roll in enumerate(imgs.keys()):\n", + " img = imgs[roll]\n", + " while img.ndim > 2:\n", + " img = np.nanmean(img, axis=0)\n", + " ax = axes[i]\n", + " ax.set_title(roll)\n", + " ax.imshow(img, vmin=vmin, vmax=vmax)" + ] + }, + { + "cell_type": "markdown", + "id": "d722f16f-7b47-4d4b-b594-b63734711df9", + "metadata": {}, + "source": [ + "### Overlay sky coordinates\n", + "\n", + "Overlay the RA and Dec grid over the combined rolls" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "11f930f9-39c9-4cfc-9436-3fe44a1b2d6f", + "metadata": {}, + "outputs": [], + "source": [ + "with fits.open(\"./miri_coro_demo_data/stage3/Level 3_i2d.fits\") as f:\n", + " wcs = WCS(f[1].header)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "60cf4e5a-5bac-4d7a-a336-ee8a0d99b2f0", + "metadata": {}, + "outputs": [], + "source": [ + "# The star coordinates at the time of observation are in the header\n", + "exp_file = uncal_sci_r1_files[0]\n", + "targ_ra = fits.getval(exp_file, 'TARG_RA', 0)\n", + "targ_dec = fits.getval(exp_file, 'TARG_DEC', 0)\n", + "starcoord = SkyCoord(targ_ra, targ_dec, unit='deg', frame='icrs')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7e06cbc9-3458-4b48-89b8-2028e29c5c8f", + "metadata": {}, + "outputs": [], + "source": [ + "fig, ax = plt.subplots(1, 1, subplot_kw={'projection':wcs})\n", + "vmin, vmax = np.nanquantile(imgs['combo'], [0.01, 0.99])\n", + "ax.imshow(imgs['combo'], vmin=vmin, vmax=vmax)\n", + "ax.scatter(*wcs.world_to_pixel(starcoord),\n", + " marker='x', s=100, c='w')\n", + "ax.grid(True)" + ] + }, + { + "cell_type": "markdown", + "id": "d55e2553", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "da6d16ec", + "metadata": {}, + "source": [ + "\"stsci_logo\" " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/notebooks/MIRI/Coronagraphy/requirements.txt b/notebooks/MIRI/Coronagraphy/requirements.txt new file mode 100644 index 0000000..9c4a9c9 --- /dev/null +++ b/notebooks/MIRI/Coronagraphy/requirements.txt @@ -0,0 +1,5 @@ +numpy<2.0 +jwst==1.15.1 +astroquery +jupyter +gwcs==0.21.0 \ No newline at end of file From 216b5f9d2753ea4a303af7386907ded6bd98ad51 Mon Sep 17 00:00:00 2001 From: Bryony Nickson Date: Thu, 6 Feb 2025 17:47:18 -0500 Subject: [PATCH 2/4] Removing incorrect warning block --- .../Coronagraphy/JWPipeNB-MIRI-Coron.ipynb | 27 +------------------ 1 file changed, 1 insertion(+), 26 deletions(-) diff --git a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb index d60fc67..a2a2f6f 100644 --- a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb +++ b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb @@ -1360,37 +1360,12 @@ "print('Found ' + str(len(reffiles)) + ' reference PSF files to process')" ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "0e932d7a-d0b5-4148-a641-2cc019ae9e51", - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "calfiles" - ] - }, { "cell_type": "markdown", "id": "fad933a4", "metadata": {}, "source": [ - "Make an association file that includes all of the different exposures. If using Master Background subtraction include the background data.\n", - "\n", - "
\n", - "Note that science data must be of type cal.fits and background exposures must be of type x1d.fits\n", - "
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "a48a7a97-c5c4-42cc-badf-aace8454c2f2", - "metadata": {}, - "outputs": [], - "source": [ - "asnfile" + "Make an association file that includes all of the different exposures. If using Master Background subtraction include the background data." ] }, { From 84943c69682ccb387bac5c680317995156158376 Mon Sep 17 00:00:00 2001 From: Bryony Nickson Date: Thu, 6 Feb 2025 17:53:14 -0500 Subject: [PATCH 3/4] Updating text to reflect correct pipeline build (11.2) --- .../Coronagraphy/JWPipeNB-MIRI-Coron.ipynb | 65 +++++++++++++++---- 1 file changed, 52 insertions(+), 13 deletions(-) diff --git a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb index a2a2f6f..9a8fc23 100644 --- a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb +++ b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb @@ -23,8 +23,8 @@ "metadata": {}, "source": [ "**Authors**: B. Nickson; MIRI branch
\n", - "**Last Updated**: Jan 28, 2024
\n", - "**Pipeline Version**: 1.14.1 (Build 10.2)" + "**Last Updated**: Feb 6, 2024
\n", + "**Pipeline Version**: 1.17.1 (Build 11.2)" ] }, { @@ -57,7 +57,7 @@ "[https://github.com/spacetelescope/jwst-pipeline-notebooks/](https://github.com/spacetelescope/jwst-pipeline-notebooks/)\n", "\n", "**Recent Changes**:
\n", - "Jan 28, 2025: Migrate from the `Coronagraphy_ExambleNB` notebook, update to Build 11.0 (jwst 1.15.1)." + "Jan 28, 2025: Migrate from the `Coronagraphy_ExambleNB` notebook, update to Build 11.2 (jwst 1.17.1)." ] }, { @@ -111,7 +111,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 126, "id": "cad6387d", "metadata": {}, "outputs": [], @@ -141,10 +141,18 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 127, "id": "b2f6fd5e", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Running in demonstration mode using online example data!\n" + ] + } + ], "source": [ "# Set parameters for demo_mode, mask, filter, data mode directories, and \n", "# processing steps.\n", @@ -214,10 +222,19 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 128, "id": "c2c53535", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "CRDS local filepath: /home/bnickson/crds\n", + "CRDS file server: https://jwst-crds.stsci.edu\n" + ] + } + ], "source": [ "# ------------------------Set CRDS context and paths----------------------\n", "# Each version of the calibration pipeline is associated with a specific CRDS\n", @@ -262,10 +279,23 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 129, "id": "61e3464a", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/html": [ + "" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], "source": [ "# Use the entire available screen width for this notebook\n", "from IPython.display import display, HTML\n", @@ -274,7 +304,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 130, "id": "c7191bfd", "metadata": {}, "outputs": [], @@ -306,10 +336,19 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 131, "id": "59fdfe7e", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "JWST Calibration Pipeline Version = 1.17.1\n", + "Using CRDS Context = jwst_1322.pmap\n" + ] + } + ], "source": [ "# --------------JWST Calibration Pipeline Imports---------------------------\n", "# Import the base JWST and calibration reference data packages\n", From 2519ae3c00517d784cdaee4041b46552870eef0f Mon Sep 17 00:00:00 2001 From: Rosa Diaz Date: Fri, 14 Mar 2025 11:35:31 -0400 Subject: [PATCH 4/4] First technical review --- .../Coronagraphy/JWPipeNB-MIRI-Coron.ipynb | 520 +++++++++--------- notebooks/MIRI/Coronagraphy/requirements.txt | 6 +- 2 files changed, 266 insertions(+), 260 deletions(-) diff --git a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb index 9a8fc23..8835af2 100644 --- a/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb +++ b/notebooks/MIRI/Coronagraphy/JWPipeNB-MIRI-Coron.ipynb @@ -48,11 +48,13 @@ "Example input data to use will be downloaded automatically unless disabled (i.e., to use local files instead).\n", "\n", "\n", - "**JWST pipeline version and CRDS context**\n", - "This notebook was written for the calibration pipeline version given above and uses the context associated with this version of the JWST Calibration Pipeline. Information about this and other contexts can be found in the JWST Calibration Reference Data System (CRDS) [server]((https://jwst-crds.stsci.edu/)). 
If you use different pipeline\n", - "versions, please refer to the table [here](https://jwst-crds.stsci.edu/display_build_contexts/) to determine what context to use. To learn more about the differences in the pipeline, read the relevant [documentation](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/jwst-operations-pipeline-build-information).\n", + "**[JWST pipeline version and CRDS context](#Set-CRDS-Context-and-Server)**:
\n", + "This notebook was written for the above-specified pipeline version and associated build context for this version of the JWST Calibration Pipeline. Information about this and other contexts can be found in the JWST Calibration Reference Data System (CRDS [server](https://jwst-crds.stsci.edu/)). If you use different pipeline versions, please refer to the table [here](https://jwst-crds.stsci.edu/display_build_contexts/) to determine what context to use. To learn more about the differences for the pipeline, read the relevant [documentation](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/jwst-operations-pipeline-build-information#references).
\n", "\n", - "**Updates**:\n", + "Please note that pipeline software development is a continuous process, so results in some cases may be slightly different if a subsequent version is used. **For optimal results, users are strongly encouraged to reprocess their data using the most recent pipeline version and [associated CRDS context](https://jwst-crds.stsci.edu/display_build_contexts/), taking advantage of bug fixes and algorithm improvements.**\n", + "Any [known issues](https://jwst-docs.stsci.edu/known-issues-with-jwst-data/nirspec-known-issues/nirspec-mos-known-issues#NIRSpecFSKnownIssues-Resamplingof2-Dspectra&gsc.tab=0:~:text=MOS%20Known%20Issues-,NIRSpec%20MOS%20Known%20Issues,-Known%20issues%20specific) for this build are noted in the notebook. \n", + "\n", + "**Updates**:
\n", "This notebook is regularly updated as improvements are made to the pipeline. Find the most up to date version of this notebook at:\n", "[https://github.com/spacetelescope/jwst-pipeline-notebooks/](https://github.com/spacetelescope/jwst-pipeline-notebooks/)\n", "\n", @@ -98,7 +100,7 @@ "id": "bae53dc6", "metadata": {}, "source": [ - "1.-Configuration\n", + "## 1.-Configuration\n", "------------------\n", "Set basic parameters to use with this notebook. These will affect what data is used, where data is located (if already in disk), and pipeline modules run on this data. The list of parameters are as follows:\n", "\n", @@ -111,7 +113,7 @@ }, { "cell_type": "code", - "execution_count": 126, + "execution_count": null, "id": "cad6387d", "metadata": {}, "outputs": [], @@ -131,7 +133,7 @@ "\n", "Set demo_mode = True to run in demonstration mode. In this mode, this\n", "notebook will download example data from the\n", - "Barbara A. Mikulski Archive for Space Telescopes (MAST) and process it through the pipeline.\n", + "Barbara A. Mikulski Archive for Space Telescopes [(MAST)](https://archive.stsci.edu/) and process it through the pipeline.\n", "This will all happen in a local directory unless modified\n", "in [Section 3](#3.-Demo-Mode-Setup-(ignore-if-not-using-demo-data)) below. \n", "\n", @@ -141,18 +143,10 @@ }, { "cell_type": "code", - "execution_count": 127, + "execution_count": null, "id": "b2f6fd5e", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Running in demonstration mode using online example data!\n" - ] - } - ], + "outputs": [], "source": [ "# Set parameters for demo_mode, mask, filter, data mode directories, and \n", "# processing steps.\n", @@ -169,30 +163,37 @@ " # Set directory paths for processing specific data; these will need\n", " # to be changed to your local directory setup (below are given as\n", " # examples)\n", - " user_home_dir = os.path.expanduser('~')\n", + " basedir = os.path.expanduser('~')\n", "\n", " # Point to where science observation data are\n", " # Assumes uncalibrated data in sci_r1_dir/uncal/ and sci_r2_dir/uncal/, \n", " # and results in stage1, stage2, stage3 directories\n", - " sci_r1_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs008/')\n", - " sci_r2_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs009/')\n", + " sci_r1_dir = os.path.join(basedir, 'FlightData1836/sci_r1/')\n", + " sci_r2_dir = os.path.join(basedir, 'FlightData1836/sci_r2/')\n", "\n", - " # Point to where reference observation data are\n", + " # Point to where reference target observation data are\n", " # Assumes uncalibrated data in ref_dir/uncal/ and results in stage1,\n", " # stage2, stage3 directories\n", - " ref_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs007/')\n", + " ref_targ_dir = os.path.join(basedir, 'FlightData1836/ref_targ/')\n", "\n", " # Point to where background observation data are\n", - " # Assumes uncalibrated data in sci_bg_dir/uncal/ and ref_bg_dir/uncal/,\n", + " # Assumes uncalibrated data in sci_bg_dir/uncal/ and ref_targ_bg_dir/uncal/,\n", " # and results in stage1, stage2 directories\n", - " sci_bg_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs030/')\n", - " ref_bg_dir = os.path.join(user_home_dir, 'FlightData/APT1386/data/Obs031/')\n", + " bg_sci_dir = os.path.join(basedir, 'FlightData1836/bg_sci/')\n", + " bg_ref_targ_dir = os.path.join(basedir, 'FlightData1836/bg_ref_targ/')\n", + "\n", + " # Define uncal dirs\n", + " uncal_sci_r1_dir = 
os.path.join(sci_r1_dir, 'uncal')\n", + " uncal_sci_r2_dir = os.path.join(sci_r2_dir, 'uncal')\n", + " uncal_ref_targ_dir = os.path.join(ref_targ_dir, 'uncal')\n", + " uncal_bg_sci_dir = os.path.join(bg_sci_dir, 'uncal')\n", + " uncal_bg_ref_targ_dir = os.path.join(bg_ref_targ_dir, 'uncal')\n", "\n", "# --------------------------Set Processing Steps--------------------------\n", "# Whether or not to process only data from a given coronagraphic mask/\n", "# filter (useful if overriding reference files) \n", "# Note that BOTH parameters must be set in order to work\n", - "use_mask = '4QPM_1550' # '4QPM_1065', '4QPM_1140', '4QPM_1550', or 'LYOT_2300'\n", + "use_mask = '4QPM_1550' # '4QPM_1065', '4QPM_1140', '4QPM_1550', or 'LYOT_2300'\n", "use_filter = 'F1550C' # 'F1065C', 'F1140C', 'F1550C', or 'F2300C'\n", "\n", "# Individual pipeline stages can be turned on/off here. Note that a later\n", @@ -205,7 +206,7 @@ "docoron3 = True # calwebb_coron3\n", "\n", "# Background processing\n", - "dodet1bg = True # calwebb_detector1\n", + "dodet1bg = True # calwebb_detector1\n", "doimage2bg = True # calwebb_image2" ] }, @@ -222,19 +223,10 @@ }, { "cell_type": "code", - "execution_count": 128, + "execution_count": null, "id": "c2c53535", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "CRDS local filepath: /home/bnickson/crds\n", - "CRDS file server: https://jwst-crds.stsci.edu\n" - ] - } - ], + "outputs": [], "source": [ "# ------------------------Set CRDS context and paths----------------------\n", "# Each version of the calibration pipeline is associated with a specific CRDS\n", @@ -244,13 +236,13 @@ "# the CRDS_CONTEXT environment variable. Here we show how this is done,\n", "# although we leave the line commented out in order to use the default context.\n", "# If you wish to specify a different context, uncomment the line below.\n", - "#%env CRDS_CONTEXT jwst_1293.pmap\n", + "#%env CRDS_CONTEXT jwst_1322.pmap\n", "\n", "# Check whether the local CRDS cache directory has been set.\n", "# If not, set it to the user home directory\n", "if (os.getenv('CRDS_PATH') is None):\n", " os.environ['CRDS_PATH'] = os.path.join(os.path.expanduser('~'), 'crds')\n", - " \n", + "\n", "# Check whether the CRDS server URL has been set. 
If not, set it.\n", "if (os.getenv('CRDS_SERVER_URL') is None):\n", " os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'\n", @@ -273,29 +265,16 @@ "id": "8cd0c995", "metadata": {}, "source": [ - "## 2.-Package Imports\n", + "## 2.-Package Imports\n", "------------------" ] }, { "cell_type": "code", - "execution_count": 129, + "execution_count": null, "id": "61e3464a", "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ "# Use the entire available screen width for this notebook\n", "from IPython.display import display, HTML\n", @@ -304,7 +283,7 @@ }, { "cell_type": "code", - "execution_count": 130, + "execution_count": null, "id": "c7191bfd", "metadata": {}, "outputs": [], @@ -312,7 +291,7 @@ "# Basic system utilities for interacting with files\n", "# ----------------------General Imports------------------------------------\n", "import glob\n", - "import copy\n", + "#import copy\n", "import time\n", "from pathlib import Path\n", "import re\n", @@ -324,10 +303,10 @@ "# Astropy utilities for opening FITS and ASCII files, and downloading demo files\n", "from astropy.io import fits\n", "from astropy.wcs import WCS\n", - "from astropy import units\n", - "from astropy.coordinates import SkyCoord, Distance\n", - "from astropy import time\n", - "from astroquery.mast import Observations, Mast\n", + "from astropy.coordinates import SkyCoord\n", + "\n", + "#from astropy import time\n", + "from astroquery.mast import Observations\n", "\n", "# -----------------------Plotting Imports----------------------------------\n", "# Matplotlib for making plots\n", @@ -336,22 +315,13 @@ }, { "cell_type": "code", - "execution_count": 131, + "execution_count": null, "id": "59fdfe7e", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "JWST Calibration Pipeline Version = 1.17.1\n", - "Using CRDS Context = jwst_1322.pmap\n" - ] - } - ], + "outputs": [], "source": [ "# --------------JWST Calibration Pipeline Imports---------------------------\n", - "# Import the base JWST and calibration reference data packages\n", + "# Import the base JWST and calibration reference files packages\n", "import jwst\n", "import crds\n", "\n", @@ -378,33 +348,75 @@ "id": "0f6131e0", "metadata": {}, "source": [ - "### Define convenience functions" + "### Define convenience functions\n", + "\n", + "Define a convenience function to select only files of a given coronagraph mask/filter from an input set" ] }, { "cell_type": "code", "execution_count": null, - "id": "a62c2c44-1fc8-4041-a24f-a5d00a3a9ceb", + "id": "f087bddb", "metadata": {}, "outputs": [], "source": [ "# Define a convenience function to select only files of a given coronagraph mask/filter from an input set\n", "def select_mask_filter_files(files, use_mask, use_filter):\n", - " if (use_mask != '') & (use_filter != ''):\n", - " keep = np.zeros(len(files))\n", - " for ii in range(0, len(files)):\n", - " with fits.open(files[ii]) as hdu:\n", - " hdu.verify()\n", - " hdr = hdu[0].header\n", - " if 'CORONMSK' in hdr: \n", - " if ((hdr['CORONMSK'] == use_mask) & (hdr['FILTER'] == use_filter)):\n", - " keep[ii] = 1\n", - " indx = np.where(keep == 1)\n", - " files_culled = files[indx]\n", - " else:\n", - " files_culled = files\n", - " \n", - " return files_culled" + " \"\"\"\n", + " Filter FITS files based on mask and filter criteria from their headers.\n", + "\n", + " Parameters:\n", + " 
-----------\n", + " files : array-like\n", + " List of FITS file paths to process\n", + " use_mask : str\n", + " Mask value to match in FITS header 'CORONMSK' key\n", + " use_filter : str\n", + " Filter value to match in FITS header 'FILTER' key\n", + "\n", + " Returns:\n", + " --------\n", + " numpy.ndarray\n", + " Filtered array of file paths matching the criteria\n", + " \"\"\"\n", + "\n", + " # Make paths absolute paths\n", + " for i in range(len(files)):\n", + " files[i] = os.path.abspath(files[i])\n", + "\n", + " # Convert files to numpy array if it isn't already\n", + " files = np.asarray(files)\n", + "\n", + " # If either mask or filter is empty, return all files\n", + " if not use_mask or not use_filter:\n", + " return files\n", + "\n", + " try:\n", + " # Initialize boolean array for keeping track of matches\n", + " keep = np.zeros(len(files), dtype=bool)\n", + "\n", + " # Process each file\n", + " for i in range(len(files)):\n", + " try:\n", + " with fits.open(files[i]) as hdu:\n", + " hdu.verify()\n", + " hdr = hdu[0].header\n", + "\n", + " # Check if requred header keywords exist\n", + " if ('CORONMSK' in hdr and 'FILTER' in hdr):\n", + " if hdr['CORONMSK'] == use_mask and hdr['FILTER'] == use_filter:\n", + " keep[i] = True\n", + " files[i] = os.path.abspath(files[i])\n", + " except (OSError, ValueError) as e:\n", + " print(f\" Warning: could not process file {files[i]}: {str(e)}\")\n", + "\n", + " # Return filtered files\n", + " indx = np.where(keep)\n", + " return files[indx]\n", + "\n", + " except Exception as e:\n", + " print(f\"Error processing files: {str(e)}\")\n", + " return files # Return original array in case of failure" ] }, { @@ -431,7 +443,7 @@ "id": "c0cfec9f", "metadata": {}, "source": [ - "3.-Demo Mode Setup (ignore if not using demo data)\n", + "## 3.-Demo Mode Setup (ignore if not using demo data)\n", "------------------\n", "\n", "If running in demonstration mode, set up the program information to\n", @@ -443,7 +455,7 @@ "For illustrative purposes, we focus on data taken through the MIRI\n", "[F1550C filter](https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-observing-modes/miri-coronagraphic-imaging#MIRICoronagraphicImaging-CoronFiltersCoronagraphfilters)\n", "and start with uncalibrated raw data products (`uncal.fits`). 
The files use the following naming schema:\n", - "`jw0138600001_04101_0000_mirimage_uncal.fits`, where *obs* refers to the observation number and *dith* refers to the\n", + "`jw01386001_04101_0000_mirimage_uncal.fits`, where *obs* refers to the observation number and *dith* refers to the\n", "dither step number.\n", "\n", "\n", @@ -464,28 +476,30 @@ " program = \"01386\"\n", " sci_r1_observtn = \"008\" \n", " sci_r2_observtn = \"009\" \n", - " ref_observtn = \"007\" \n", + " ref_targ_observtn = \"007\" \n", " bg_sci_observtn = \"030\" \n", - " bg_ref_observtn = \"031\"\n", + " bg_ref_targ_observtn = \"031\"\n", + "\n", + " # ----------Define the base and observation directories----------\n", " basedir = os.path.join('.', 'miri_coro_demo_data')\n", " download_dir = basedir\n", " sci_r1_dir = os.path.join(basedir, 'Obs' + sci_r1_observtn)\n", " sci_r2_dir = os.path.join(basedir, 'Obs' + sci_r2_observtn)\n", - " ref_dir = os.path.join(basedir, 'Obs' + ref_observtn)\n", + " ref_targ_dir = os.path.join(basedir, 'Obs' + ref_targ_observtn)\n", " bg_sci_dir = os.path.join(basedir, 'Obs' + bg_sci_observtn)\n", - " bg_ref_dir = os.path.join(basedir, 'Obs' + bg_ref_observtn)\n", + " bg_ref_targ_dir = os.path.join(basedir, 'Obs' + bg_ref_targ_observtn)\n", " uncal_sci_r1_dir = os.path.join(sci_r1_dir, 'uncal')\n", " uncal_sci_r2_dir = os.path.join(sci_r2_dir, 'uncal')\n", - " uncal_ref_dir = os.path.join(ref_dir, 'uncal')\n", + " uncal_ref_targ_dir = os.path.join(ref_targ_dir, 'uncal')\n", " uncal_bg_sci_dir = os.path.join(bg_sci_dir, 'uncal')\n", - " uncal_bg_ref_dir = os.path.join(bg_ref_dir, 'uncal')\n", + " uncal_bg_ref_targ_dir = os.path.join(bg_ref_targ_dir, 'uncal')\n", "\n", " # Ensure filepaths for input data exist\n", - " input_dirs = [uncal_sci_r1_dir, uncal_sci_r2_dir, uncal_ref_dir, uncal_bg_sci_dir, uncal_bg_ref_dir]\n", + " input_dirs = [uncal_sci_r1_dir, uncal_sci_r2_dir, uncal_ref_targ_dir, uncal_bg_sci_dir, uncal_bg_ref_targ_dir]\n", "\n", " for dir in input_dirs:\n", " if not os.path.exists(dir):\n", - " os.makedirs(dir)" + " os.makedirs(dir)" ] }, { @@ -506,9 +520,8 @@ "# Obtain a list of observation IDs for the specified demo program\n", "if demo_mode:\n", " obs_id_table = Observations.query_criteria(instrument_name=[\"MIRI/CORON\"],\n", - " provenance_name=[\"CALJWST\"],\n", - " proposal_id=[program]\n", - " )\n" + " provenance_name=[\"CALJWST\"],\n", + " proposal_id=[program])" ] }, { @@ -532,28 +545,31 @@ " productSubGroupDescription=query_dict['productSubGroupDescription'],\n", " calib_level=query_dict['calib_level'])\n", " files_to_download.extend(filtered_products['dataURI'])\n", - " \n", "\n", " # Cull to a unique list of files for each observation type \n", " # Science roll 1 \n", " sci_r1_files_to_download = []\n", - " sci_r1_files_to_download = np.unique([i for i in files_to_download if str(program+sci_r1_observtn) in i])\n", + " sci_r1_files_to_download = np.unique([i for i in files_to_download if str(program + sci_r1_observtn) in i])\n", + "\n", " # Science roll 2 \n", " sci_r2_files_to_download = []\n", - " sci_r2_files_to_download = np.unique([i for i in files_to_download if str(program+sci_r2_observtn) in i])\n", - " # PSF Reference files\n", - " ref_files_to_download = []\n", - " ref_files_to_download = np.unique([i for i in files_to_download if str(program+ref_observtn) in i])\n", + " sci_r2_files_to_download = np.unique([i for i in files_to_download if str(program + sci_r2_observtn) in i])\n", + "\n", + " # PSF Reference taraget data\n", + " 
ref_targ_files_to_download = []\n", + " ref_targ_files_to_download = np.unique([i for i in files_to_download if str(program + ref_targ_observtn) in i])\n", + "\n", " # Background files (science assoc.)\n", " bg_sci_files_to_download = []\n", - " bg_sci_files_to_download = np.unique([i for i in files_to_download if str(program+bg_sci_observtn) in i])\n", - " # Background files (reference assoc.)\n", - " bg_ref_files_to_download = [] \n", - " bg_ref_files_to_download = np.unique([i for i in files_to_download if str(program+bg_ref_observtn) in i])\n", + " bg_sci_files_to_download = np.unique([i for i in files_to_download if str(program + bg_sci_observtn) in i])\n", "\n", - " print(\"Science files selected for downloading: \", len(sci_r1_files_to_download)+len(sci_r1_files_to_download))\n", - " print(\"PSF Reference files selected for downloading: \", len(ref_files_to_download))\n", - " print(\"Background selected for downloading: \", len(bg_sci_files_to_download)+len(bg_ref_files_to_download))" + " # Background files (reference target assoc.)\n", + " bg_ref_targ_files_to_download = [] \n", + " bg_ref_targ_files_to_download = np.unique([i for i in files_to_download if str(program + bg_ref_targ_observtn) in i])\n", + "\n", + " print(\"Science files selected for downloading: \", len(sci_r1_files_to_download) + len(sci_r1_files_to_download))\n", + " print(\"PSF Reference target files selected for downloading: \", len(ref_targ_files_to_download))\n", + " print(\"Background selected for downloading: \", len(bg_sci_files_to_download) + len(bg_ref_targ_files_to_download))" ] }, { @@ -579,16 +595,16 @@ "outputs": [], "source": [ "if demo_mode:\n", - " #for filename in sci_r1_files_to_download:\n", - " # sci_r1_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_sci_r1_dir, Path(filename).name))\n", + " for filename in sci_r1_files_to_download:\n", + " sci_r1_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_sci_r1_dir, Path(filename).name))\n", " for filename in sci_r2_files_to_download:\n", " sci_r2_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_sci_r2_dir, Path(filename).name))\n", - " #for filename in ref_files_to_download:\n", - " # ref_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_ref_dir, Path(filename).name))\n", - " #for filename in bg_sci_files_to_download:\n", - " # bg_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_bg_sci_dir, Path(filename).name))\n", - " #for filename in bg_ref_files_to_download:\n", - " # bg_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_bg_ref_dir, Path(filename).name))" + " for filename in ref_targ_files_to_download:\n", + " ref_targ_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_ref_targ_dir, Path(filename).name))\n", + " for filename in bg_sci_files_to_download:\n", + " bg_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_bg_sci_dir, Path(filename).name))\n", + " for filename in bg_ref_targ_files_to_download:\n", + " bg_ref_targ_manifest = Observations.download_file(filename, local_path=os.path.join(uncal_bg_ref_targ_dir, Path(filename).name))" ] }, { @@ -604,7 +620,7 @@ "id": "4ae87477", "metadata": {}, "source": [ - "4.-Directory Setup\n", + "## 4.-Directory Setup\n", "------------------\n", "Set up detailed paths to input/output stages here. 
We will set up individual `stage1/` and `stage2/` sub directories for each observation, but a single `stage3/` directory for the combined [calwebb_coron3 output products](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_coron3.html)." ] @@ -625,25 +641,25 @@ "det1_sci_r2_dir = os.path.join(sci_r2_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", "image2_sci_r2_dir = os.path.join(sci_r2_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", "\n", - "# Define output subdirectories to keep PSF reference data products organized\n", - "det1_ref_dir = os.path.join(ref_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", - "image2_ref_dir = os.path.join(ref_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "# Define output subdirectories to keep PSF reference target data products organized\n", + "det1_ref_targ_dir = os.path.join(ref_targ_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_ref_targ_dir = os.path.join(ref_targ_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", "\n", "# Define output subdirectories to keep background data products organized\n", "# Sci Bkg\n", "det1_bg_sci_dir = os.path.join(bg_sci_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", "image2_bg_sci_dir = os.path.join(bg_sci_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", "\n", - "# Ref Bkg\n", - "det1_bg_ref_dir = os.path.join(bg_ref_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", - "image2_bg_ref_dir = os.path.join(bg_ref_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "# Ref target Bkg\n", + "det1_bg_ref_targ_dir = os.path.join(bg_ref_targ_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_bg_ref_targ_dir = os.path.join(bg_ref_targ_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", "\n", "# Single stage3 directory for combined coron3 products.\n", "coron3_dir = os.path.join(basedir, 'stage3')\n", "\n", "# We need to check that the desired output directories exist, and if not create them\n", - "det1_dirs = [det1_sci_r1_dir, det1_sci_r2_dir, det1_ref_dir, det1_bg_sci_dir, det1_bg_ref_dir]\n", - "image2_dirs = [image2_sci_r1_dir, image2_sci_r2_dir, image2_ref_dir, image2_bg_sci_dir, image2_bg_ref_dir]\n", + "det1_dirs = [det1_sci_r1_dir, det1_sci_r2_dir, det1_ref_targ_dir, det1_bg_sci_dir, det1_bg_ref_targ_dir]\n", + "image2_dirs = [image2_sci_r1_dir, image2_sci_r2_dir, image2_ref_targ_dir, image2_bg_sci_dir, image2_bg_ref_targ_dir]\n", "\n", "for dir in det1_dirs:\n", " if not os.path.exists(dir):\n", @@ -680,14 +696,13 @@ "id": "b6603987-5168-45b4-b527-d1572de46495", "metadata": {}, "source": [ - "5.-Detector1 Pipeline\n", + "## 5.-Detector1 Pipeline\n", "------------------\n", "In this section, we process our uncalibrated data through the calwebb_detector1 pipeline to create Stage 1 data products. For coronagraphic exposures, these data products include a `*_rate.fits` file (a 2D countrate product, based on averaging over all integrations in the exposure), but specifically also a `*_rateints.fits` file, a 3D countrate product, that contains the individual results of each integration, wherein 2D countrate images for each integration are stacked along the 3rd axis of the data cubes (ncols x nrows x nints). 
These data products have units of DN/s.\n", "\n", "See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/stages-of-jwst-data-processing/calwebb_detector1\n", "\n", - "By default, all steps in the calwebb_detector1 are run for MIRI except: the [ipc](https://jwst-pipeline.readthedocs.io/en/stable/jwst/ipc/index.html#ipc-step) and [charge_migration](https://jwst-pipeline.readthedocs.io/en/stable/jwst/charge_migration/index.html#charge-migration-step) steps. There are also several steps performed for MIRI data that are not performed for other instruments. These include: [emicorr](https://jwst-pipeline.readthedocs.io/en/latest/jwst/emicorr/index.html#emicorr-step), [firstframe](https://jwst-pipeline.readthedocs.io/en/latest/jwst/firstframe/index.html#firstframe-step), [lastframe](https://jwst-pipeline.readthedocs.io/en/latest/jwst/lastframe/index.html#lastframe-step), [reset](https://jwst-pipeline.readthedocs.io/en/latest/jwst/reset/index.html#reset-step) and [rscd](https://jwst-pipeline.readthedocs.io/en/latest/jwst/rscd/index.html#rscd-step).\n", - "\n" + "By default, all steps in the calwebb_detector1 are run for MIRI except: the [ipc](https://jwst-pipeline.readthedocs.io/en/stable/jwst/ipc/index.html#ipc-step) and [charge_migration](https://jwst-pipeline.readthedocs.io/en/stable/jwst/charge_migration/index.html#charge-migration-step) steps. There are also several steps performed for MIRI data that are not performed for other instruments. These include: [emicorr](https://jwst-pipeline.readthedocs.io/en/latest/jwst/emicorr/index.html#emicorr-step), [firstframe](https://jwst-pipeline.readthedocs.io/en/latest/jwst/firstframe/index.html#firstframe-step), [lastframe](https://jwst-pipeline.readthedocs.io/en/latest/jwst/lastframe/index.html#lastframe-step), [reset](https://jwst-pipeline.readthedocs.io/en/latest/jwst/reset/index.html#reset-step) and [rscd](https://jwst-pipeline.readthedocs.io/en/latest/jwst/rscd/index.html#rscd-step)." 
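The cell that follows builds this configuration as a nested dictionary handed to the `call` method. As a compact illustration of the pattern (not the notebook's full configuration), each top-level key names a pipeline step and maps to that step's parameter overrides:

```python
from jwst.pipeline import Detector1Pipeline

# Minimal sketch of the steps-dictionary pattern; unlisted steps keep defaults.
example_det1dict = {
    'jump': {'maximum_cores': 'half'},  # multiprocess the jump step
    'refpix': {'skip': True},           # example of switching a step off
}

# Hypothetical input file; uncomment to run on a real uncalibrated exposure.
# Detector1Pipeline.call('jw01386008001_04101_00001_mirimage_uncal.fits',
#                        steps=example_det1dict, save_results=True,
#                        output_dir='./stage1')
```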
] }, { @@ -715,6 +730,7 @@ "det1dict['group_scale'], det1dict['dq_init'], det1dict['emicorr'], det1dict['saturation'] = {}, {}, {}, {}\n", "det1dict['firstframe'], det1dict['lastframe'], det1dict['reset'], det1dict['linearity'], det1dict['rscd'] = {}, {}, {}, {}, {}\n", "det1dict['dark_current'], det1dict['refpix'], det1dict['jump'], det1dict['ramp_fit'], det1dict['gain_scale'] = {}, {}, {}, {}, {}\n", + "det1dict['clean_flicker_noise'] = {}\n", "\n", "# Overrides for whether or not certain steps should be skipped (example)\n", "# skipping refpix step\n", @@ -723,22 +739,22 @@ "# Overrides for various reference files\n", "# Files should be in the base local directory or provide full path\n", "#det1dict['dq_init']['override_mask'] = 'myfile.fits' # Bad pixel mask\n", - "#det1dict['saturation']['override_saturation'] = 'myfile.fits' # Saturation\n", - "#det1dict['reset']['override_reset'] = 'myfile.fits' # Reset\n", - "#det1dict['linearity']['override_linearity'] = 'myfile.fits' # Linearity\n", - "#det1dict['rscd']['override_rscd'] = 'myfile.fits' # RSCD\n", - "#det1dict['dark_current']['override_dark'] = 'myfile.fits' # Dark current subtraction\n", - "#det1dict['jump']['override_gain'] = 'myfile.fits' # Gain used by jump step\n", - "#det1dict['ramp_fit']['override_gain'] = 'myfile.fits' # Gain used by ramp fitting step\n", - "#det1dict['jump']['override_readnoise'] = 'myfile.fits' # Read noise used by jump step\n", - "#det1dict['ramp_fit']['override_readnoise'] = 'myfile.fits' # Read noise used by ramp fitting step\n", + "#det1dict['saturation']['override_saturation'] = 'myfile.fits' # Saturation\n", + "#det1dict['reset']['override_reset'] = 'myfile.fits' # Reset\n", + "#det1dict['linearity']['override_linearity'] = 'myfile.fits' # Linearity\n", + "#det1dict['rscd']['override_rscd'] = 'myfile.fits' # RSCD\n", + "#det1dict['dark_current']['override_dark'] = 'myfile.fits' # Dark current subtraction\n", + "#det1dict['jump']['override_gain'] = 'myfile.fits' # Gain used by jump step\n", + "#det1dict['ramp_fit']['override_gain'] = 'myfile.fits' # Gain used by ramp fitting step\n", + "#det1dict['jump']['override_readnoise'] = 'myfile.fits' # Read noise used by jump step\n", + "#det1dict['ramp_fit']['override_readnoise'] = 'myfile.fits' # Read noise used by ramp fitting step\n", "\n", "# Turn on multi-core processing (off by default). 
Choose what fraction of cores to use (quarter, half, or all)\n", "det1dict['jump']['maximum_cores'] = 'half' \n", "det1dict['ramp_fit']['maximum_cores'] = 'half'\n", "\n", "# Save the frame-averaged dark data created during the dark current subtraction step\n", - "det1dict['dark_current']['dark_output'] = 'dark.fits' # Frame-averaged dark \n", + "det1dict['dark_current']['dark_output'] = 'dark.fits' # Frame-averaged dark \n", "\n", "# Turn on detection of cosmic ray showers (off by default)\n", "#det1dict['jump']['find_showers'] = True" @@ -805,24 +821,14 @@ "sstring1 = os.path.join(uncal_sci_r1_dir, 'jw*mirimage*uncal.fits')\n", "sstring2 = os.path.join(uncal_sci_r2_dir, 'jw*mirimage*uncal.fits')\n", "\n", - "uncal_sci_r1_files = np.array(sorted(glob.glob(sstring1)))\n", - "uncal_sci_r2_files = np.array(sorted(glob.glob(sstring2)))\n", + "uncal_sci_r1_files = sorted(glob.glob(sstring1))\n", + "uncal_sci_r2_files = sorted(glob.glob(sstring2))\n", "\n", "# Check that these are the correct mask/filter to use\n", "uncal_sci_r1_files = select_mask_filter_files(uncal_sci_r1_files, use_mask, use_filter)\n", "uncal_sci_r2_files = select_mask_filter_files(uncal_sci_r2_files, use_mask, use_filter)\n", "\n", - "print('Found ' + str((len(uncal_sci_r1_files)+len(uncal_sci_r2_files))) + ' science input files')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "2d610e7f-0332-4a7a-8017-525da7aaf2f1", - "metadata": {}, - "outputs": [], - "source": [ - "sstring1" + "print('Found ' + str((len(uncal_sci_r1_files) + len(uncal_sci_r2_files))) + ' science input files')" ] }, { @@ -837,8 +843,9 @@ "# Run the pipeline on these input files by a simple loop over files using\n", "# our custom parameter dictionary\n", "if dodet1:\n", - " #for file in uncal_sci_r1_files:\n", - " # Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_sci_r1_dir)\n", + " for file in uncal_sci_r1_files:\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_sci_r1_dir)\n", + "\n", " for file in uncal_sci_r2_files:\n", " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_sci_r2_dir)\n", "else:\n", @@ -850,11 +857,8 @@ "id": "1dba8c73-ce57-48e4-97c7-39504f777be3", "metadata": {}, "source": [ - "### Calibrating PSF Reference Files\n", - "Look for input PSF Reference files and run calwebb_detector1\n", - "pipeline using the call method. \n", - "\n", - "There should be 9 files in total, one for each exposure of the PSF reference target taken in the 9-point dither pattern. " + "### Calibrating PSF Reference Target Files\n", + "Look for input PSF Reference Target files. There should be 9 files in total, one for each exposure of the PSF reference target taken in the 9-point dither pattern. 
" ] }, { @@ -866,12 +870,21 @@ "source": [ "# Now let's look for input files of the form *uncal.fits from the background\n", "# observations\n", - "sstring = os.path.join(uncal_ref_dir, 'jw*mirimage*uncal.fits')\n", - "uncal_ref_files = np.array(sorted(glob.glob(sstring)))\n", + "sstring = os.path.join(uncal_ref_targ_dir, 'jw*mirimage*uncal.fits')\n", + "uncal_ref_targ_files = sorted(glob.glob(sstring))\n", + "\n", "# Check that these are the band/channel to use\n", - "uncal_ref_files = select_mask_filter_files(uncal_ref_files, use_mask, use_filter)\n", + "uncal_ref_targ_files = select_mask_filter_files(uncal_ref_targ_files, use_mask, use_filter)\n", "\n", - "print('Found ' + str(len(uncal_ref_files)) + ' PSF reference input files')" + "print('Found ' + str(len(uncal_ref_targ_files)) + ' PSF reference input files')" + ] + }, + { + "cell_type": "markdown", + "id": "c8929e22", + "metadata": {}, + "source": [ + "Runs calwebb_detector1 module on the reference target files using the same custom parameter dictionary." ] }, { @@ -884,8 +897,9 @@ "# Run the pipeline on these input files by a simple loop over files using\n", "# our custom parameter dictionary\n", "if dodet1:\n", - " for file in uncal_ref_files:\n", - " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_ref_dir)\n", + " for file in uncal_ref_targ_files:\n", + " print(file)\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_ref_targ_dir)\n", "else:\n", " print('Skipping Detector1 processing for PSF reference data')" ] @@ -912,16 +926,16 @@ "# Look for input files of the form *uncal.fits from the background\n", "# observations\n", "sstring1 = os.path.join(uncal_bg_sci_dir, 'jw*mirimage*uncal.fits')\n", - "sstring2 = os.path.join(uncal_bg_ref_dir, 'jw*mirimage*uncal.fits')\n", + "sstring2 = os.path.join(uncal_bg_ref_targ_dir, 'jw*mirimage*uncal.fits')\n", "\n", - "uncal_bg_sci_files = np.array(sorted(glob.glob(sstring1)))\n", - "uncal_bg_ref_files = np.array(sorted(glob.glob(sstring2)))\n", + "uncal_bg_sci_files = sorted(glob.glob(sstring1))\n", + "uncal_bg_ref_targ_files = sorted(glob.glob(sstring2))\n", "\n", "# Check that these are the filter to use\n", "uncal_bg_sci_files = select_mask_filter_files(uncal_bg_sci_files, use_mask, use_filter)\n", - "uncal_bg_ref_files = select_mask_filter_files(uncal_bg_ref_files, use_mask, use_filter)\n", + "uncal_bg_ref_targ_files = select_mask_filter_files(uncal_bg_ref_targ_files, use_mask, use_filter)\n", "\n", - "print('Found ' + str((len(uncal_bg_sci_files)+len(uncal_bg_ref_files))) + ' background input files')" + "print('Found ' + str((len(uncal_bg_sci_files) + len(uncal_bg_ref_targ_files))) + ' background input files')" ] }, { @@ -938,8 +952,8 @@ "if dodet1bg:\n", " for file in uncal_bg_sci_files:\n", " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_bg_sci_dir)\n", - " for file in uncal_bg_ref_files:\n", - " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_bg_ref_dir)\n", + " for file in uncal_bg_ref_targ_files:\n", + " Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_bg_ref_targ_dir)\n", "else:\n", " print('Skipping Detector1 processing for BG data')" ] @@ -969,7 +983,7 @@ "id": "81419572", "metadata": {}, "source": [ - "6.-Image2 Pipeline\n", + "## 6.-Image2 Pipeline\n", "------------------\n", "\n", "In this section we process our 3D countrate (`rateints`) products from\n", @@ -1065,7 +1079,7 @@ "source": [ "def 
writel2asn(onescifile, bgfiles, asnfile, prodname):\n", " # Define the basic association of science files\n", - " asn = afl.asn_from_list([onescifile], rule=DMSLevel2bBase, product_name=prodname) # Wrap in array since input was single exposure\n", + " asn = afl.asn_from_list([onescifile], rule=DMSLevel2bBase, product_name=prodname) # Wrap in array since input was single exposure\n", "\n", " #Coron/filter configuration for this sci file\n", " with fits.open(onescifile) as hdu:\n", @@ -1077,7 +1091,7 @@ " for file in bgfiles:\n", " hdu.verify()\n", " hdr = hdu[0].header\n", - " if hdr['FILTER'] == this_filter:\n", + " if hdr['FILTER'] == this_filter and hdr['CORONMSK'] == this_mask:\n", " asn['products'][0]['members'].append({'expname': file, 'exptype': 'background'})\n", "\n", " # Write the association to a json file\n", @@ -1091,7 +1105,7 @@ "id": "a893f72c", "metadata": {}, "source": [ - "Find and sort all of the input files, ensuring use of absolute paths. \n", + "Find and sort all of the input files for the selected filter and coronagraphic mask, ensuring use of absolute paths. \n", "\n", "The input files should be `rateints.fits` products and there should be a total of 2 files corresponding to the science target; 9 files corresponding to the reference target; 2 files corresponding to the science background target and 2 files corresponding to the reference background target." ] @@ -1103,53 +1117,39 @@ "metadata": {}, "outputs": [], "source": [ - "# Science Files \n", + "# Identify Science Files \n", "# Roll 1\n", "sstring = os.path.join(det1_sci_r1_dir, 'jw*mirimage*rateints.fits') # Use files from the detector1 output folder\n", "sci_r1_files = sorted(glob.glob(sstring))\n", - "for ii in range(0, len(sci_r1_files)):\n", - " sci_r1_files[ii] = os.path.abspath(sci_r1_files[ii])\n", - "sci_r1_files = np.array(sci_r1_files)\n", + "\n", "# Check that these are the mask/filter to use\n", "sci_r1_files = select_mask_filter_files(sci_r1_files, use_mask, use_filter)\n", + "\n", "# Roll 2\n", "sstring = os.path.join(det1_sci_r2_dir, 'jw*mirimage*rateints.fits') # Use files from the detector1 output folder\n", "sci_r2_files = sorted(glob.glob(sstring))\n", - "for ii in range(0, len(sci_r2_files)):\n", - " sci_r2_files[ii] = os.path.abspath(sci_r2_files[ii])\n", - "sci_r2_files = np.array(sci_r2_files)\n", "sci_r2_files = select_mask_filter_files(sci_r2_files, use_mask, use_filter)\n", "\n", - "# PSF Ref Files\n", - "sstring = os.path.join(det1_ref_dir, 'jw*mirimage*rateints.fits')\n", - "ref_files = sorted(glob.glob(sstring))\n", - "for ii in range(0, len(ref_files)):\n", - " ref_files[ii] = os.path.abspath(ref_files[ii])\n", - "ref_files = np.array(ref_files)\n", - "ref_files = select_mask_filter_files(ref_files, use_mask, use_filter)\n", + "# Identify PSF Ref Target Files\n", + "sstring = os.path.join(det1_ref_targ_dir, 'jw*mirimage*rateints.fits')\n", + "ref_targ_files = sorted(glob.glob(sstring))\n", + "ref_targ_files = select_mask_filter_files(ref_targ_files, use_mask, use_filter)\n", "\n", "# Background Files\n", "# Sci Bkg\n", "sstring = os.path.join(det1_bg_sci_dir, 'jw*mirimage*rateints.fits')\n", "bg_sci_files = sorted(glob.glob(sstring))\n", - "for ii in range(0, len(bg_sci_files)):\n", - " bg_sci_files[ii] = os.path.abspath(bg_sci_files[ii])\n", - "bg_sci_files = np.array(bg_sci_files)\n", "bg_sci_files = select_mask_filter_files(bg_sci_files, use_mask, use_filter)\n", "\n", - "# Ref Bkg \n", - "sstring = os.path.join(det1_bg_ref_dir, 'jw*mirimage*rateints.fits')\n", - "bg_ref_files = 
sorted(glob.glob(sstring))\n", - "for ii in range(0, len(bg_ref_files)):\n", - " bg_ref_files[ii] = os.path.abspath(bg_ref_files[ii])\n", - "bg_ref_files = np.array(bg_ref_files)\n", - "# Check that these are the mask/filter to use\n", - "bg_ref_files = select_mask_filter_files(bg_ref_files, use_mask, use_filter)\n", + "# Ref target Bkg \n", + "sstring = os.path.join(det1_bg_ref_targ_dir, 'jw*mirimage*rateints.fits')\n", + "bg_ref_targ_files = sorted(glob.glob(sstring))\n", + "bg_ref_targ_files = select_mask_filter_files(bg_ref_targ_files, use_mask, use_filter)\n", "\n", "print('Found ' + str(len(sci_r1_files) + len(sci_r2_files)) + ' science files')\n", - "print('Found ' + str(len(ref_files)) + ' reference files')\n", + "print('Found ' + str(len(ref_targ_files)) + ' reference files')\n", "print('Found ' + str(len(bg_sci_files)) + ' science background files')\n", - "print('Found ' + str(len(bg_ref_files)) + ' reference background files')" + "print('Found ' + str(len(bg_ref_targ_files)) + ' reference background files')" ] }, { @@ -1157,7 +1157,7 @@ "id": "cbefec2c", "metadata": {}, "source": [ - "Step through each of the science files, using relevant associated backgrounds in calwebb_image2 processing." + "Step through each of the science files for both rolls. First creates the association file using relevant associated backgrounds and then runs calwebb_image2 processing." ] }, { @@ -1170,10 +1170,11 @@ "if doimage2:\n", " # Science Roll 1\n", " # Generate a proper background-subtracting association file\n", - " #for file in sci_r1_files:\n", - " # asnfile = os.path.join(image2_sci_r1_dir, 'l2asn.json')\n", - " # writel2asn(file, bg_sci_files, asnfile, 'Level2')\n", - " # Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_sci_r1_dir)\n", + " for file in sci_r1_files:\n", + " asnfile = os.path.join(image2_sci_r1_dir, 'l2asn.json')\n", + " writel2asn(file, bg_sci_files, asnfile, 'Level2')\n", + " Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_sci_r1_dir)\n", + "\n", " # Science Roll 2\n", " # Generate a proper background-subtracting association file\n", " for file in sci_r2_files:\n", @@ -1184,6 +1185,14 @@ " print('Skipping Image2 processing for SCI data')" ] }, + { + "cell_type": "markdown", + "id": "a52152a8", + "metadata": {}, + "source": [ + "Step through each of the reference target files. First creates the association file using relevant associated backgrounds and then runs calwebb_image2 processing." 
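The cell below distinguishes the nine dithered reference exposures by pulling the five-digit exposure counter out of each filename with a regular expression. A self-contained illustration of that lookup, using a hypothetical filename:

```python
import re

# Hypothetical reference-target exposure name
fname = 'jw01386007001_04101_00003_mirimage_rateints.fits'

# The capture group grabs the 5 digits immediately preceding '_mirimage'
match = re.compile(r'(\d{5})_mirimage').search(fname)
if match:
    print(match.group(1))  # prints: 00003
```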
+ ] + }, { "cell_type": "code", "execution_count": null, @@ -1192,15 +1201,16 @@ "outputs": [], "source": [ "if doimage2:\n", - " for file in ref_files:\n", + " for file in ref_targ_files:\n", " # Extract the dither number to use in asn filename\n", " match = re.compile(r'(\\d{5})_mirimage').search(file)\n", + "\n", " # Generate a proper background-subtracting association file\n", - " asnfile = os.path.join(image2_ref_dir, match.group(1)+'_l2asn.json')\n", - " writel2asn(file, bg_ref_files, asnfile, 'Level2')\n", - " Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_ref_dir) \n", + " asnfile = os.path.join(image2_ref_targ_dir, match.group(1) + '_l2asn.json')\n", + " writel2asn(file, bg_ref_targ_files, asnfile, 'Level2')\n", + " Image2Pipeline.call(asnfile, steps=image2dict, save_bsub=True, save_results=True, output_dir=image2_ref_targ_dir) \n", "else:\n", - " print('Skipping Image2 processing for PSF REF data')" + " print('Skipping Image2 processing for PSF REF target data')" ] }, { @@ -1208,7 +1218,7 @@ "id": "224e3bd3", "metadata": {}, "source": [ - "Reduce the backgrounds individually." + "Process the backgrounds for science and reference target through calwebb_image2 individually. This is needed if doing master background subtraction in Stage 3." ] }, { @@ -1221,8 +1231,9 @@ "if doimage2bg:\n", " for file in bg_sci_files:\n", " Image2Pipeline.call(file, steps=image2dict, save_results=True, output_dir=image2_bg_sci_dir)\n", - " for file in bg_ref_files:\n", - " Image2Pipeline.call(file, steps=image2dict, save_results=True, output_dir=image2_bg_ref_dir)\n", + "\n", + " for file in bg_ref_targ_files:\n", + " Image2Pipeline.call(file, steps=image2dict, save_results=True, output_dir=image2_bg_ref_targ_dir)\n", "else:\n", " print('Skipping Image2 processing for BG data')" ] @@ -1253,7 +1264,7 @@ "id": "64771f88", "metadata": {}, "source": [ - "7.-Coron3 Pipeline\n", + "## 7.-Coron3 Pipeline\n", "------------------\n", "In this section, we'll run the Coron3 (calwebb_coron3) pipeline on the calibrated MIRI coronagraphic exposures to produce PSF-subtracted, resampled, combined images of the source object. The input to calwebb_coron3 must be in the form of an association file that lists one or more exposures of a science target and one or more reference PSF targets. The individual target and reference PSF exposures should be in the form of 3D photometrically calibrated (`_calints`) products from calwebb_image2 processing. Each pipeline step will loop over the 3D stack of per-integration images contained in each exposure. 
The relevant steps are:\n", "\n", @@ -1288,25 +1299,26 @@ "\n", "# Boilerplate dictionary setup\n", "coron3dict = {}\n", - "coron3dict['outlier_detection'], coron3dict['stack_refs'], coron3dict['align_refs'], coron3dict['klip'], coron3dict['resample'] = {}, {}, {}, {}, {}\n", + "coron3dict['outlier_detection'], coron3dict['stack_refs'], coron3dict['align_refs'] = {}, {}, {}\n", + "coron3dict['klip'], coron3dict['resample'] = {}, {}\n", "\n", "# Set the maximum number of KL transform rows to keep when computing the PSF fit to the target.\n", - "coron3dict['klip']['truncate'] = 25 # The maximum number of KL modes to use.\n", + "coron3dict['klip']['truncate'] = 25 # The maximum number of KL modes to use.\n", "\n", "# Overrides for various reference files\n", "# Files should be in the base local directory or provide full path\n", - "#coron3dict['align_refs']['override_psfmask'] = 'myfile.fits' # The PSFMASK reference file \n", + "#coron3dict['align_refs']['override_psfmask'] = 'myfile.fits' # The PSFMASK reference file\n", "\n", "# Options for adjusting performance for the outlier detection step\n", "#coron3dict['outlier_detection']['kernel_size'] = '7 7' # Dial this to adjust the detector kernel size\n", "#coron3dict['outlier_detection']['threshold_percent'] = 99.8 # Dial this to be more/less aggressive in outlier flagging (values closer to 100% are less aggressive)\n", "\n", "# Options for adjusting the resample step\n", - "#coron3dict['resample']['pixfrac'] = 1.0 # Fraction by which input pixels are “shrunk” before being drizzled onto the output image grid \n", - "#coron3dict['resample']['kernel'] = 'square' # Kernel form used to distribute flux onto the output image \n", + "#coron3dict['resample']['pixfrac'] = 1.0 # Fraction by which input pixels are “shrunk” before being drizzled onto the output image grid\n", + "#coron3dict['resample']['kernel'] = 'square' # Kernel form used to distribute flux onto the output image\n", "#coron3dict['resample']['fillval'] = 'INDEF' # Value to assign to output pixels that have zero weight or do not receive any flux from any input pixels during drizzling\n", - "#coron3dict['resample']['weight_type'] = 'ivm' # Weighting type for each input image. \n", - "#coron3dict['resample']['output_shape'] = None # \n", + "#coron3dict['resample']['weight_type'] = 'ivm' # Weighting type for each input image.\n", + "#coron3dict['resample']['output_shape'] = None \n", "#coron3dict['resample']['crpix'] = None\n", "#coron3dict['resample']['crval'] = None\n", "#coron3dict['resample']['rotation'] = None\n", @@ -1322,7 +1334,7 @@ "id": "86b1952f", "metadata": {}, "source": [ - "Define a function to create association files for Stage 3. " + "Define a function to create association files for Stage 3. It creates an association from a list of science exposures and a list of PSF reference exposures." 
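For reference, the structure the function builds looks roughly like the following hand-written sketch (abridged; the filenames are illustrative, and a real association carries additional metadata keys such as `asn_type` and `asn_rule`):

```python
example_asn = {
    'products': [{
        'name': 'Level 3',
        'members': [
            # Science exposures from the two rolls
            {'expname': 'jw01386008001_04101_00001_mirimage_calints.fits', 'exptype': 'science'},
            {'expname': 'jw01386009001_04101_00001_mirimage_calints.fits', 'exptype': 'science'},
            # One of the nine dithered PSF reference exposures
            {'expname': 'jw01386007001_04101_00001_mirimage_calints.fits', 'exptype': 'psf'},
        ],
    }],
}
```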
] }, { @@ -1332,15 +1344,15 @@ "metadata": {}, "outputs": [], "source": [ - "def writel3asn(scifiles, reffiles, asnfile, prodname):\n", + "def writel3asn(scifiles, ref_targ_files, asnfile, prodname):\n", " \"\"\"Create an association from a list of science exposures and a list of PSF reference exposures, \n", " intended for calwebb_coron3 processing.\n", - " \n", + "\n", " Parameters\n", " ----------\n", " scifiles : list\n", " List of science files\n", - " reffiles : list\n", + " ref_targ_files : list\n", " List of reference files\n", " asnfile : str\n", " The path to the association file.\n", @@ -1348,10 +1360,10 @@ " # Define the basic association of science files\n", " asn = afl.asn_from_list(scifiles, rule=DMS_Level3_Base, product_name=prodname)\n", "\n", - " # Add reference files to the association\n", - " nref = len(reffiles)\n", + " # Add reference target files to the association\n", + " nref = len(ref_targ_files)\n", " for ii in range(0, nref):\n", - " asn['products'][0]['members'].append({'expname': reffiles[ii], 'exptype': 'psf'})\n", + " asn['products'][0]['members'].append({'expname': ref_targ_files[ii], 'exptype': 'psf'})\n", "\n", " # Write the association to a json file\n", " _, serialized = asn.dump()\n", @@ -1380,23 +1392,19 @@ "r1_calfiles = sorted(glob.glob(sstring))\n", "r2_calfiles = sorted(glob.glob(sstring2))\n", "calfiles = r1_calfiles + r2_calfiles\n", - "for ii in range(0, len(calfiles)):\n", - " calfiles[ii] = os.path.abspath(calfiles[ii])\n", - "calfiles = np.array(calfiles)\n", + "\n", "# Check that these are the mask/filter to use\n", "calfiles = select_mask_filter_files(calfiles, use_mask, use_filter)\n", "\n", - "# Reference Files need the calints.fits files\n", - "sstring = os.path.join(image2_ref_dir, 'jw*mirimage*calints.fits')\n", - "reffiles = sorted(glob.glob(sstring))\n", - "for ii in range(0, len(reffiles)):\n", - " reffiles[ii] = os.path.abspath(reffiles[ii])\n", - "reffiles = np.array(reffiles)\n", + "# Reference target Files need the calints.fits files\n", + "sstring = os.path.join(image2_ref_targ_dir, 'jw*mirimage*calints.fits')\n", + "ref_targ_files = sorted(glob.glob(sstring))\n", + "\n", "# Check that these are the mask/filter to use\n", - "reffiles = select_mask_filter_files(reffiles, use_mask, use_filter)\n", + "ref_targ_files = select_mask_filter_files(ref_targ_files, use_mask, use_filter)\n", "\n", "print('Found ' + str(len(calfiles)) + ' science files to process')\n", - "print('Found ' + str(len(reffiles)) + ' reference PSF files to process')" + "print('Found ' + str(len(ref_targ_files)) + ' reference PSF files to process')" ] }, { @@ -1416,7 +1424,7 @@ "source": [ "if docoron3:\n", " asnfile = os.path.join(coron3_dir, 'l3asn.json')\n", - " writel3asn(calfiles, reffiles, asnfile, 'Level 3')\n", + " writel3asn(calfiles, ref_targ_files, asnfile, 'Level 3')\n", " Coron3Pipeline.call(asnfile, steps=coron3dict, save_results=True, output_dir=coron3_dir)\n", "else:\n", " print('Skipping coron3 processing')" @@ -1456,7 +1464,7 @@ "id": "2966d5ff", "metadata": {}, "source": [ - "8.-Examine the output\n", + "## 8.-Examine the output\n", "------------------\n", "Here we'll plot the spectra to see what our source looks like." 
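Each `*_psfsub.fits` product holds one PSF-subtracted image per integration, so the cube must be collapsed before display, as the plotting cell below does with `np.nanmean`. A minimal sketch of that step, assuming the demo-mode output path:

```python
import numpy as np
from jwst import datamodels

# Demo-mode Stage 3 output path (adjust if processing your own data)
psfsub_file = './miri_coro_demo_data/stage3/jw01386008001_04101_00001_mirimage_a3001_psfsub.fits'

with datamodels.open(psfsub_file) as model:
    cube = model.data                   # 3-D stack: (nints, nrows, ncols)
    image2d = np.nanmean(cube, axis=0)  # average over integrations

print(image2d.shape)
```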
] @@ -1470,7 +1478,7 @@ "source": [ "imgs = {'roll1': datamodels.open(\"./miri_coro_demo_data/stage3/jw01386008001_04101_00001_mirimage_a3001_psfsub.fits\").data.copy(),\n", " 'roll2': datamodels.open(\"./miri_coro_demo_data/stage3/jw01386009001_04101_00001_mirimage_a3001_psfsub.fits\").data.copy(),\n", - " 'combo': datamodels.open(\"./miri_coro_demo_data/stage3/Level 3_i2d.fits\").data.copy()}\n" + " 'combo': datamodels.open(\"./miri_coro_demo_data/stage3/Level 3_i2d.fits\").data.copy()}" ] }, { @@ -1533,7 +1541,7 @@ "metadata": {}, "outputs": [], "source": [ - "fig, ax = plt.subplots(1, 1, subplot_kw={'projection':wcs})\n", + "fig, ax = plt.subplots(1, 1, subplot_kw={'projection': wcs})\n", "vmin, vmax = np.nanquantile(imgs['combo'], [0.01, 0.99])\n", "ax.imshow(imgs['combo'], vmin=vmin, vmax=vmax)\n", "ax.scatter(*wcs.world_to_pixel(starcoord),\n", @@ -1560,7 +1568,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "miri-coron-new", "language": "python", "name": "python3" }, @@ -1574,7 +1582,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.10" + "version": "3.12.9" } }, "nbformat": 4, diff --git a/notebooks/MIRI/Coronagraphy/requirements.txt b/notebooks/MIRI/Coronagraphy/requirements.txt index 9c4a9c9..d335f98 100644 --- a/notebooks/MIRI/Coronagraphy/requirements.txt +++ b/notebooks/MIRI/Coronagraphy/requirements.txt @@ -1,5 +1,3 @@ -numpy<2.0 -jwst==1.15.1 +jwst==1.17.1 astroquery -jupyter -gwcs==0.21.0 \ No newline at end of file +jupyter \ No newline at end of file