From fe0ecb08ec1b2ab5f39834776e8b9b1261dd2d99 Mon Sep 17 00:00:00 2001 From: Rachel Cooper Date: Wed, 26 Feb 2025 18:16:00 -0500 Subject: [PATCH 1/6] first commit of working AMI notebook --- .../NIRISS/AMI/JWPipeNB-niriss-ami.ipynb | 1764 +++++++++++++++++ 1 file changed, 1764 insertions(+) create mode 100755 notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb diff --git a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb new file mode 100755 index 0000000..8a0a0f5 --- /dev/null +++ b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb @@ -0,0 +1,1764 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "095a295c", + "metadata": {}, + "source": [ + "\"stsci_logo\" " + ] + }, + { + "cell_type": "markdown", + "id": "0393e357-9d9d-4516-b28d-4d335fad33a0", + "metadata": {}, + "source": [ + "##### NIRISS AMI Pipeline Notebook\n", + "\n", + "**Authors**: R. Cooper
\n", + "**Last Updated**: January 1, 2025
\n", + "**Pipeline Version**: 1.17.0 (Build 11.2)" + ] + }, + { + "cell_type": "markdown", + "id": "7da2029f", + "metadata": {}, + "source": [ + "# **Purpose**:\n", + "\n", + "This notebook provides a framework for processing Near-Infrared\n", + "Imager and Slitless Spectrograph (NIRISS) Aperture Masking Interferometry (AMI) data through all\n", + "three James Webb Space Telescope (JWST) pipeline stages. Data is assumed\n", + "to be located in one observation folder according to paths set up below.\n", + "It should not be necessary to edit any cells other than in the\n", + "[Configuration](#1.-Configuration) section unless modifying the standard\n", + "pipeline processing options.\n", + "\n", + "**Data**:\n", + "This notebook uses an example dataset from\n", + "[Program ID](https://www.stsci.edu/jwst/science-execution/program-information)\n", + "1093 (PI: Thatte) which is the AMI commissioning program.\n", + "\n", + "Example input data to use will be downloaded automatically unless\n", + "disabled (i.e., to use local files instead).\n", + "\n", + "**JWST pipeline version and CRDS context** This notebook was written for the\n", + "calibration pipeline version given above. It sets the CRDS context\n", + "to use the most recent version available in the JWST Calibration\n", + "Reference Data System (CRDS). If you use different pipeline versions or\n", + "CRDS context, please read the relevant release notes\n", + "([here for pipeline](https://github.com/spacetelescope/jwst),\n", + "[here for CRDS](https://jwst-crds.stsci.edu/)) for possibly relevant\n", + "changes.
\n", + "\n", + "**Updates**:\n", + "This notebook is regularly updated as improvements are made to the\n", + "pipeline. Find the most up to date version of this notebook at:\n", + "https://github.com/spacetelescope/jwst-pipeline-notebooks/\n", + "\n", + "**Recent Changes**:
\n", + "Someday: original notebook released
\n", + "

" + ] + }, + { + "cell_type": "markdown", + "id": "453945c7", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "5291c2f1", + "metadata": {}, + "source": [ + "\n", + "## Table of Contents\n", + "1. [Configuration](#1.-Configuration) \n", + "2. [Package Imports](#2.-Package-Imports)\n", + "3. [Demo Mode Setup (ignore if not using demo data)](#3.-Demo-Mode-Setup-(ignore-if-not-using-demo-data))\n", + "4. [Directory Setup](#4.-Directory-Setup)\n", + "5. [Detector 1 Pipeline](#5.-Detector1-Pipeline)\n", + "6. [Image2 Pipeline](#6.-Image2-Pipeline)\n", + "7. [AMI3 Pipeline](#7.-AMI3-Pipeline)\n", + "8. [(Optional) Run AMI3 steps independently](#8.-(Optional)-Run-AMI3-steps-independently)\n", + "9. [Visualize the data](#8.-Visualize-the-drizzle-combined-image)" + ] + }, + { + "cell_type": "markdown", + "id": "e5a88584", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "43bd07d7", + "metadata": {}, + "source": [ + "## 1. Configuration\n", + "------------------\n", + "Set basic configuration for running notebook." + ] + }, + { + "cell_type": "markdown", + "id": "026a649d-a0d2-4e3f-ab6d-cf8c26e6ce1d", + "metadata": {}, + "source": [ + "#### Install dependencies and parameters" + ] + }, + { + "cell_type": "markdown", + "id": "84b6c02c-a7cd-4999-96a0-87ce398d7e9a", + "metadata": {}, + "source": [ + "To make sure that the pipeline version is compatabile with the steps\n", + "discussed below and the required dependencies and packages are installed,\n", + "you can create a fresh conda environment and install the provided\n", + "`requirements.txt` file:\n", + "```\n", + "conda create -n niriss_ami_pipeline python=3.11\n", + "conda activate niriss_ami_pipeline\n", + "pip install -r requirements.txt\n", + "```\n", + "\n", + "Set the basic parameters to use with this notebook. These will affect\n", + "what data is used, where data is located (if already in disk), and\n", + "pipeline modules run in this data. The list of parameters are:\n", + "\n", + "* demo_mode\n", + "* directories with data\n", + "* pipeline modules" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "06ea414c", + "metadata": {}, + "outputs": [], + "source": [ + "# Basic import necessary for configuration\n", + "import os" + ] + }, + { + "cell_type": "markdown", + "id": "61bb14e6", + "metadata": {}, + "source": [ + "
\n", + "Note that demo_mode must be set appropriately below.\n", + "
\n", + "\n", + "Set demo_mode = True to run in demonstration mode. In this\n", + "mode this notebook will download example data from the Barbara A.\n", + "Mikulski Archive for Space Telescopes (MAST) and process it through the\n", + "pipeline. This will all happen in a local directory unless modified\n", + "in [Section 3](#3.-Demo-Mode-Setup-(ignore-if-not-using-demo-data))\n", + "below.\n", + "\n", + "Set demo_mode = False if you want to process your own data\n", + "that has already been downloaded and provide the location of the data.
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "323f87d0", + "metadata": {}, + "outputs": [], + "source": [ + "# Set parameters for demo_mode, channel, band, data mode directories, and \n", + "# processing steps.\n", + "\n", + "# -----------------------------Demo Mode---------------------------------\n", + "demo_mode = True\n", + "\n", + "if demo_mode:\n", + " print('Running in demonstration mode using online example data!')\n", + "\n", + "# --------------------------User Mode Directories------------------------\n", + "# If demo_mode = False, look for user data in these paths\n", + "if not demo_mode:\n", + " # Set directory paths for processing specific data; these will need\n", + " # to be changed to your local directory setup (below are given as\n", + " # examples)\n", + " user_home_dir = os.path.expanduser('~')\n", + "\n", + " # Point to where science observation data are\n", + " # Assumes uncalibrated data in sci_dir/uncal/ and results in stage1,\n", + " # stage2, stage3 directories\n", + " sci_dir = os.path.join(user_home_dir, 'JWSTData/PID_1093/')\n", + "\n", + "# --------------------------Set Processing Steps--------------------------\n", + "# Individual pipeline stages can be turned on/off here. Note that a later\n", + "# stage won't be able to run unless data products have already been\n", + "# produced from the prior stage.\n", + "\n", + "# Science processing\n", + "dodet1 = False # calwebb_detector1\n", + "doimage2 = False # calwebb_image2\n", + "doami3 = False # calwebb_ami3\n", + "doami3_independent = False # Run ami_analyze, ami_normalize separately\n", + "doviz = True # Visualize calwebb_ami3 output" + ] + }, + { + "cell_type": "markdown", + "id": "1070079a", + "metadata": {}, + "source": [ + "### Set CRDS context and server\n", + "Before importing CRDS and JWST modules, we need\n", + "to configure our environment. This includes defining a CRDS cache\n", + "directory in which to keep the reference files that will be used by the\n", + "calibration pipeline.\n", + "\n", + "If the root directory for the local CRDS cache directory has not been set\n", + "already, it will be set to create one in the home directory." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "baf7070d", + "metadata": {}, + "outputs": [], + "source": [ + "# ------------------------Set CRDS context and paths----------------------\n", + "\n", + "# Set CRDS context (if overriding to use a specific version of reference\n", + "# files; leave commented out to use latest reference files by default)\n", + "#%env CRDS_CONTEXT jwst_1254.pmap\n", + "\n", + "# Check whether the local CRDS cache directory has been set.\n", + "# If not, set it to the user home directory\n", + "if (os.getenv('CRDS_PATH') is None):\n", + " os.environ['CRDS_PATH'] = os.path.join(os.path.expanduser('~'), 'crds')\n", + "# Check whether the CRDS server URL has been set. If not, set it.\n", + "if (os.getenv('CRDS_SERVER_URL') is None):\n", + " os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'\n", + "\n", + "# Echo CRDS path in use\n", + "print(f\"CRDS local filepath: {os.environ['CRDS_PATH']}\")\n", + "print(f\"CRDS file server: {os.environ['CRDS_SERVER_URL']}\")" + ] + }, + { + "cell_type": "markdown", + "id": "165cb98d", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "575588c8", + "metadata": {}, + "source": [ + "## 2. Package Imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4415f6d9", + "metadata": {}, + "outputs": [], + "source": [ + "# Use the entire available screen width for this notebook\n", + "from IPython.display import display, HTML\n", + "display(HTML(\"\"))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fb9f7f54-ff98-428b-9fa9-b39c50210c3a", + "metadata": {}, + "outputs": [], + "source": [ + "# Basic system utilities for interacting with files\n", + "# ----------------------General Imports------------------------------------\n", + "import glob\n", + "import time\n", + "from pathlib import Path\n", + "from collections import defaultdict\n", + "#import urllib.request\n", + "\n", + "# Numpy for doing calculations\n", + "import numpy as np\n", + "\n", + "# -----------------------Astroquery Imports--------------------------------\n", + "# ASCII files, and downloading demo files\n", + "from astroquery.mast import Observations\n", + "\n", + "# For visualizing data\n", + "import matplotlib.pyplot as plt\n", + "from astropy.visualization import (MinMaxInterval, SqrtStretch,\n", + " ImageNormalize)\n", + "\n", + "# For optional input argument overrides\n", + "import asdf\n", + "from synphot import SourceSpectrum\n", + "import stsynphot as stsyn\n", + "\n", + "# for JWST calibration pipeline\n", + "import jwst\n", + "import crds\n", + "\n", + "from jwst.pipeline import Detector1Pipeline\n", + "from jwst.pipeline import Image2Pipeline\n", + "from jwst.pipeline import Ami3Pipeline\n", + "from jwst.ami import AmiAnalyzeStep, AmiNormalizeStep, utils\n", + "\n", + "# JWST pipeline utilities\n", + "from jwst import datamodels\n", + "from jwst.associations import asn_from_list # Tools for creating association files\n", + "from jwst.associations.lib.rules_level3_base import DMS_Level3_Base # Definition of a Lvl3 association file\n", + "\n", + "# Echo pipeline version and CRDS context in use\n", + "print(f\"JWST Calibration Pipeline Version: {jwst.__version__}\")\n", + "print(f\"Using CRDS Context: {crds.get_context_name('jwst')}\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5844d476-f8b3-4c24-9c79-091d6016314d", + "metadata": {}, + "outputs": [], + "source": [ + "# Start a timer to keep track of runtime\n", + "time0 = time.perf_counter()" + ] + }, + { + "cell_type": "markdown", + "id": "987d3f75", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "7ef8050f", + "metadata": {}, + "source": [ + "## 3. Demo Mode Setup (ignore if not using demo data)\n", + "------------------\n", + "If running in demonstration mode, set up the program information to\n", + "retrieve the uncalibrated data automatically from MAST using\n", + "[astroquery](https://astroquery.readthedocs.io/en/latest/mast/mast.html).\n", + "MAST has a dedicated service for JWST data retrieval, so the archive can\n", + "be searched by instrument keywords rather than just filenames or proposal IDs.
\n", + "\n", + "The list of searchable keywords for filtered JWST MAST queries \n", + "is [here](https://mast.stsci.edu/api/v0/_jwst_inst_keywd.html).
\n", + "\n", + "For illustrative purposes, we will use a single target and reference star pair. Each exposure was taken in the [F480W filter](https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph/niriss-instrumentation/niriss-filters) filter with the [non-redundant mask (NRM)](https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph/niriss-instrumentation/niriss-non-redundant-mask) that enables AMI in the pupil.\n", + "\n", + "We will start with uncalibrated data products. The files are named\n", + "`jw010930nn001_03102_00001_nis_uncal.fits`, where *nn* refers to the\n", + "observation number: in this case, observation 12 for the target and \n", + "observation 15 for the reference star.\n", + "\n", + "More information about the JWST file naming conventions can be found at:\n", + "https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/file_naming.html" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8d595f1e-2590-4f01-bb92-13bb43a22b3c", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up the program information and paths for demo program\n", + "if demo_mode:\n", + " print('Running in demonstration mode and will download example data from MAST!')\n", + " data_dir = os.path.join('.', 'nis_ami_demo_data')\n", + " sci_dir = os.path.join(data_dir, 'pipeline_output')\n", + " uncal_dir = os.path.join(sci_dir, 'uncal')\n", + "\n", + " # Ensure filepaths for input data exist\n", + " if not os.path.exists(uncal_dir):\n", + " os.makedirs(uncal_dir)\n", + " \n", + " # Create directory if it does not exist\n", + " if not os.path.isdir(data_dir):\n", + " os.mkdir(data_dir)" + ] + }, + { + "cell_type": "markdown", + "id": "6674a8b8", + "metadata": {}, + "source": [ + "Identify list of science (SCI) uncalibrated files associated with visits." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a4b22375", + "metadata": {}, + "outputs": [], + "source": [ + "# Obtain a list of observation IDs for the specified demo program\n", + "if demo_mode:\n", + " # Science data\n", + " program = '01093'\n", + " sci_obs_id_table = Observations.query_criteria(instrument_name=[\"NIRISS/AMI\"],\n", + " proposal_id=[program],\n", + " filters=['F480M;NRM'], # Data for Specific Filter\n", + " obs_id=['jw' + program + '*']\n", + " )" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a666350e", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "# Turn the list of visits into a list of uncalibrated data files\n", + "if demo_mode:\n", + " # Define types of files to select\n", + " file_dict = {'uncal': {'product_type': 'SCIENCE',\n", + " 'productSubGroupDescription': 'UNCAL',\n", + " 'calib_level': [1]}}\n", + "\n", + " # Science files\n", + " sci_files = []\n", + " # Loop over visits identifying uncalibrated files that are associated\n", + " # with them\n", + " for exposure in (sci_obs_id_table):\n", + " products = Observations.get_product_list(exposure)\n", + " for filetype, query_dict in file_dict.items():\n", + " filtered_products = Observations.filter_products(products, productType=query_dict['product_type'],\n", + " productSubGroupDescription=query_dict['productSubGroupDescription'],\n", + " calib_level=query_dict['calib_level'])\n", + " sci_files.extend(filtered_products['dataURI'])\n", + " \n", + " # Select only the exposures we want to use based on filename\n", + " sci_observtn = ['012','015']\n", + " visit = '001'\n", + " visitgroup = '03'\n", + " seq_id = \"1\"\n", + " act_id = '02'\n", + " expnum = '00001'\n", + "\n", + " filestrings = ['jw'+program+sciobs+visit+'_'+visitgroup+seq_id+act_id+'_'+expnum for sciobs in sci_observtn]\n", + " #sci_files_to_download = []\n", + " \n", + "\n", + " sci_files_to_download = [scifile for scifile in sci_files if any(filestr in scifile for filestr in filestrings)]\n", + " \n", + " sci_files_to_download = sorted(set(sci_files_to_download))\n", + " print(f\"Science files selected for downloading: {len(sci_files_to_download)}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "c36a54be", + "metadata": {}, + "source": [ + "Download all the uncal files and place them into the appropriate\n", + "directories.\n", + "\n", + "
\n", + "Warning: If this notebook is halted during this step the downloaded file\n", + "may be incomplete, and cause crashes later on!\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "764fa682", + "metadata": {}, + "outputs": [], + "source": [ + "if demo_mode:\n", + " for filename in sci_files_to_download:\n", + " sci_manifest = Observations.download_file(filename,\n", + " local_path=os.path.join(uncal_dir, Path(filename).name))" + ] + }, + { + "cell_type": "markdown", + "id": "5b14864b", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "a6c51254-6295-4f98-bc25-d07300e0d8f4", + "metadata": {}, + "source": [ + "## 4. Directory Setup\n", + "---------------------\n", + "Set up detailed paths to input/output stages here." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bffbedc3-fbe3-4c56-ad18-85b5d0c7d880", + "metadata": {}, + "outputs": [], + "source": [ + "# Define output subdirectories to keep science data products organized\n", + "uncal_dir = os.path.join(sci_dir, 'uncal') # Uncalibrated pipeline inputs should be here\n", + "det1_dir = os.path.join(sci_dir, 'stage1') # calwebb_detector1 pipeline outputs will go here\n", + "image2_dir = os.path.join(sci_dir, 'stage2') # calwebb_image2 pipeline outputs will go here\n", + "ami3_dir = os.path.join(sci_dir, 'stage3') # calwebb_ami3 pipeline outputs will go here\n", + "ami3_dir_alt = os.path.join(sci_dir, 'stage3_alt') # outputs from running AMI steps independently will go here\n", + "\n", + "# We need to check that the desired output directories exist, and if not\n", + "# create them\n", + "if not os.path.exists(det1_dir):\n", + " os.makedirs(det1_dir)\n", + "if not os.path.exists(image2_dir):\n", + " os.makedirs(image2_dir)\n", + "if not os.path.exists(ami3_dir):\n", + " os.makedirs(ami3_dir)\n", + "if not os.path.exists(ami3_dir_alt):\n", + " os.makedirs(ami3_dir_alt)" + ] + }, + { + "cell_type": "markdown", + "id": "1cd8e6a7-433d-4e5b-a832-46f0bc96a0fe", + "metadata": {}, + "source": [ + "Look at the files to determine exposure parameters and practice using JWST datamodels:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f767a7fa-50cb-4411-8aef-a32b7fde8917", + "metadata": {}, + "outputs": [], + "source": [ + "uncal_files = sorted(glob.glob(os.path.join(uncal_dir, '*_uncal.fits')))\n", + "\n", + "# Open file as JWST datamodels\n", + "uncal_targ = datamodels.open(uncal_files[0])\n", + "uncal_ref = datamodels.open(uncal_files[1])\n", + "\n", + "for uncal_f in [uncal_targ, uncal_ref]:\n", + " # print file name\n", + " print(uncal_f.meta.filename)\n", + " \n", + " # Print out exposure info\n", + " print(f\"Instrument: {uncal_f.meta.instrument.name}\")\n", + " print(f\"Filter: {uncal_f.meta.instrument.filter}\")\n", + " print(f\"Pupil: {uncal_f.meta.instrument.pupil}\")\n", + " print(f\"Number of integrations: {uncal_f.meta.exposure.nints}\")\n", + " print(f\"Number of groups: {uncal_f.meta.exposure.ngroups}\")\n", + " print(f\"Readout pattern: {uncal_f.meta.exposure.readpatt}\")\n", + " print(f\"Dither position number: {uncal_f.meta.dither.position_number}\")\n", + " print(\"\\n\")" + ] + }, + { + "cell_type": "markdown", + "id": "0469a648-9bb1-40f9-9f8b-fb3a29bc5c7f", + "metadata": {}, + "source": [ + "From the above, we confirm that the data files are for the NIRISS instrument\n", + "using the `F480M` filter in the [Filter Wheel](https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph/niriss-instrumentation/niriss-pupil-and-filter-wheels)\n", + "and the `NRM` in the Pupil Wheel. \n", + "\n", + "Both exposures use the [`NISRAPID` readout pattern](https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph/niriss-instrumentation/niriss-detector-overview/niriss-detector-readout-patterns). The target has 5 groups per integration, and 69 integrations per exposure. The reference star has 12 groups per integration, and 61 integrations per exposure. 
They were taken at the same dither position; primary dither pattern position 1.\n", + "\n", + "For more information about how JWST exposures are defined by up-the-ramp sampling, see the\n", + "[Understanding Exposure Times JDox article](https://jwst-docs.stsci.edu/understanding-exposure-times).\n" + ] + }, + { + "cell_type": "markdown", + "id": "2700768a-a641-4241-b00d-66614f2c8bb9", + "metadata": {}, + "source": [ + "Display the last group of one integration of the target and reference star's `uncal` images:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "60792b8c-e160-4e2c-b899-4bbe8c3071f1", + "metadata": {}, + "outputs": [], + "source": [ + "# Image normalization (square root stretch)\n", + "norm1 = ImageNormalize(uncal_targ.data[0][-1], \n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + "norm2 = ImageNormalize(uncal_ref.data[0][-1], \n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + "\n", + "# Display the images\n", + "fig, axs = plt.subplots(1,2, figsize=(9,5))\n", + "axs[0].set_title('Target Uncal Data')\n", + "im1 = axs[0].imshow(uncal_targ.data[0][-1], norm=norm1)\n", + "axs[1].set_title('Ref Star Uncal Data')\n", + "im2 = axs[1].imshow(uncal_ref.data[0][-1], norm=norm2)\n", + "\n", + "for im in [im1,im2]:\n", + " cbar = plt.colorbar(im, shrink=.9,location='bottom', pad=.05)\n", + " cbar.ax.tick_params(axis='x', rotation=45)\n", + "for ax in axs:\n", + " ax.axis('off')\n", + "plt.tight_layout()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4c2a4925-e5bf-4d48-8843-68960f7bae9e", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f843ff7f-58b6-4294-bfaa-606c29abc524", + "metadata": {}, + "outputs": [], + "source": [ + "# Print out the time benchmark\n", + "time1 = time.perf_counter()\n", + "print(f\"Runtime so far: {time1 - time0:0.0f} seconds\")" + ] + }, + { + "cell_type": "markdown", + "id": "97528f9b", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "267e28e0-a63f-4790-b8a0-3ad4bff5a167", + "metadata": {}, + "source": [ + "## 5. Detector1 Pipeline\n", + "Run the datasets through the\n", + "[Detector1](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview/stages-of-jwst-data-processing/calwebb_detector1)\n", + "stage of the pipeline to apply detector level calibrations and create a\n", + "countrate data product where slopes are fitted to the integration ramps.\n", + "These `*_rateints.fits` products are 3D (nintegrations x nrows x ncols)\n", + "and contain the fitted ramp slopes for each integration.\n", + "2D countrate data products (`*_rate.fits`) are also\n", + "created (nrows x ncols) which have been averaged over all\n", + "integrations. the fitted ramp slopes\n", + "for each integration.\n", + "\n", + "By default, all steps in the Detector1 stage of the pipeline are run for\n", + "NIRISS except: the `ipc` correction step and the `gain_scale` step. Note\n", + "that while the [`persistence` step](https://jwst-pipeline.readthedocs.io/en/latest/jwst/persistence/description.html)\n", + "is set to run by default, this step does not automatically correct the\n", + "science data for persistence. The `persistence` step creates a\n", + "`*_trapsfilled.fits` file which is a model that records the number\n", + "of traps filled at each pixel at the end of an exposure. This file would be\n", + "used as an input to the `persistence` step, via the `input_trapsfilled`\n", + "argument, to correct a science exposure for persistence. Since persistence\n", + "is not well calibrated for NIRISS, we do not perform a persistence\n", + "correction and thus turn off this step to speed up calibration and to not\n", + "create files that will not be used in the subsequent analysis. This step\n", + "can be turned off when running the pipeline in Python by doing:\n", + "```\n", + "rate_result = Detector1Pipeline.call(uncal,steps={'persistence': {'skip': True}})\n", + "```\n", + "or as indicated in the cell bellow using a dictionary.\n", + "\n", + "The [charge_migration step](https://jwst-pipeline.readthedocs.io/en/latest/jwst/charge_migration/index.html#charge-migration-step)\n", + "is particularly important for NIRISS images to mitigate apparent flux loss\n", + "in resampled images due to the spilling of charge from a central pixel into\n", + "its neighboring pixels (see [Goudfrooij et al. 2023](https://ui.adsabs.harvard.edu/abs/2023arXiv231116301G/abstract)\n", + "for details). Charge migration occurs when the accumulated charge in a\n", + "central pixel exceeds a certain signal limit, which is ~25,000 ADU. This\n", + "step is turned on by default for NIRISS imaging mode when using CRDS\n", + "contexts of `jwst_1159.pmap` or later. Different signal limits for each filter are provided by the\n", + "[pars-chargemigrationstep parameter files](https://jwst-crds.stsci.edu).\n", + "Users can specify a different signal limit by running this step with the\n", + "`signal_threshold` flag and entering another signal limit in units of ADU.\n", + "The effect is stronger when there is high contrast between a bright pixel and neighboring faint pixel,\n", + "as is the case for the strongly peaked AMI PSF.\n", + "\n", + "For AMI mode, preliminary investigation shows that dark subtraction does not improve calibration,\n", + "and may in fact have a detrimental effect, so we turn it off here. 
\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3592ee58-b424-4bd6-973b-88fd081b81d4", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up a dictionary to define how the Detector1 pipeline should be configured\n", + "\n", + "# Boilerplate dictionary setup\n", + "det1dict = defaultdict(dict)\n", + "\n", + "# Step names are copied here for reference\n", + "det1_steps = ['group_scale','dq_init','saturation','ipc','superbias','refpix',\n", + " 'linearity','persistence','dark_current','charge_migration',\n", + " 'jump','ramp_fit','gain_scale']\n", + "\n", + "# Overrides for whether or not certain steps should be skipped\n", + "# skipping the ipc, persistence, and dark steps\n", + "det1dict['ipc']['skip'] = True\n", + "det1dict['persistence']['skip'] = True\n", + "det1dict['dark_current']['skip'] = True\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#det1dict['dq_init']['override_mask'] = 'myfile.fits' # Bad pixel mask\n", + "#det1dict['saturation']['override_saturation'] = 'myfile.fits' # Saturation\n", + "#det1dict['reset']['override_reset'] = 'myfile.fits' # Reset\n", + "#det1dict['linearity']['override_linearity'] = 'myfile.fits' # Linearity\n", + "#det1dict['rscd']['override_rscd'] = 'myfile.fits' # RSCD\n", + "#det1dict['dark_current']['override_dark'] = 'myfile.fits' # Dark current subtraction\n", + "#det1dict['jump']['override_gain'] = 'myfile.fits' # Gain used by jump step\n", + "#det1dict['ramp_fit']['override_gain'] = 'myfile.fits' # Gain used by ramp fitting step\n", + "#det1dict['jump']['override_readnoise'] = 'myfile.fits' # Read noise used by jump step\n", + "#det1dict['ramp_fit']['override_readnoise'] = 'myfile.fits' # Read noise used by ramp fitting step\n", + "\n", + "# Turn on multi-core processing (off by default). 
Choose what fraction of cores to use (quarter, half, or all)\n", + "det1dict['jump']['maximum_cores'] = 'half'\n", + "\n", + "# Alter parameters of certain steps (example)\n", + "#det1dict['charge_migration']['signal_threshold'] = X" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0f8f107b-5f80-4674-bc34-2d88791e4409", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "# Run Detector1 stage of pipeline, specifying:\n", + "# output directory to save *_rateints.fits files\n", + "# save_results flag set to True so the *rateints.fits files are saved\n", + "# save_calibrated_ramp set to True so *ramp.fits files are saved\n", + "\n", + "if dodet1:\n", + " for uncal in uncal_files:\n", + " rate_result = Detector1Pipeline.call(uncal, output_dir=det1_dir, steps=det1dict, save_results=True, save_calibrated_ramp=True)\n", + "else:\n", + " print('Skipping Detector1 processing')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e4cc2b48-7ec7-4b59-b3ee-2dfdf0782c01", + "metadata": {}, + "outputs": [], + "source": [ + "# Print out the time benchmark\n", + "time1 = time.perf_counter()\n", + "print(f\"Runtime for Detector1: {time1 - time0:0.0f} seconds\")" + ] + }, + { + "cell_type": "markdown", + "id": "7fc5520a-6097-482b-b3e0-a6bf94cf7af3", + "metadata": {}, + "source": [ + "### Exploring the data\n", + "\n", + "Identify `*_rateints.fits` files and verify which pipeline steps were run and\n", + "which calibration reference files were applied.\n", + "\n", + "The header contains information about which calibration steps were\n", + "completed and skipped and which reference files were used to process the\n", + "data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1c122766-a545-4042-93e8-f68c18f62fdf", + "metadata": {}, + "outputs": [], + "source": [ + "if dodet1:\n", + " # find rateints files\n", + " rateints_files = sorted(glob.glob(os.path.join(det1_dir, '*_rateints.fits')))\n", + "\n", + " # Read in file as datamodel\n", + " rateints_targ = datamodels.open(rateints_files[0])\n", + " rateints_ref = datamodels.open(rateints_files[1])\n", + "\n", + " # Check which steps were run\n", + " for step, status in rateints_targ.meta.cal_step.instance.items():\n", + " print(f\"{step}: {status}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "2f2396cc-939f-44a6-878c-b33955796da8", + "metadata": {}, + "source": [ + "Check which CRDS version and reference files were used to calibrate the dataset:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c525e133-d875-4ea7-9552-d7f0df05747c", + "metadata": {}, + "outputs": [], + "source": [ + "if dodet1:\n", + " for key, val in rateints_targ.meta.ref_file.instance.items():\n", + " print(f\"{key}:\")\n", + " for param in rateints_targ.meta.ref_file.instance[key]:\n", + " # for subkey, subval in key[val].items():\n", + " print(f\"\\t{rateints_targ.meta.ref_file.instance[key][param]}\")" + ] + }, + { + "cell_type": "markdown", + "id": "7a669ad6-52b9-421d-a8ab-92b2aa4aad13", + "metadata": {}, + "source": [ + "Display the first integration of the target and reference star's `rateints` images:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dba7df99-1af4-49e1-991d-720782b4a074", + "metadata": {}, + "outputs": [], + "source": [ + "if dodet1:\n", + " norm1 = ImageNormalize(rateints_targ.data[0], \n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + " norm2 = ImageNormalize(rateints_ref.data[0], \n", + " 
interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + " fig, axs = plt.subplots(1,2, figsize=(9,5))\n", + " axs[0].set_title('Target Rateints Data')\n", + " im1 = axs[0].imshow(rateints_targ.data[0], norm=norm1)\n", + " axs[1].set_title('Ref Star Rateints Data')\n", + " im2 = axs[1].imshow(rateints_ref.data[0], norm=norm2)\n", + " \n", + " for im in [im1,im2]:\n", + " cbar = plt.colorbar(im, shrink=.9,location='bottom', pad=.05)\n", + " cbar.ax.tick_params(axis='x', rotation=45)\n", + " for ax in axs:\n", + " ax.axis('off')\n", + " plt.tight_layout()" + ] + }, + { + "cell_type": "markdown", + "id": "ae80024a-c9e6-4c8d-85f9-8718f3800e38", + "metadata": {}, + "source": [ + "The white pixels in the images above are those flagged as DO_NOT_USE by the pipeline, so their pixel values in the image are replaced with `NaN` values." + ] + }, + { + "cell_type": "markdown", + "id": "b06450da", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "350808ee-9579-4cb5-8f02-a74904c91449", + "metadata": {}, + "source": [ + "## 6. Image2 Pipeline \n", + "\n", + "In the [Image2 stage of the pipeline](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_image2.html),\n", + "calibrated data products are created (`*_cal.fits` or\n", + "`*_calints.fits` files, depending on whether the input files are\n", + "`*_rate.fits` or `*_rateints.fits`).\n", + "\n", + "In this pipeline processing stage, the data are [flat fielded](https://jwst-pipeline.readthedocs.io/en/latest/jwst/flatfield/index.html#flatfield-step). For AMI, we do not perform any of the other `Image2` steps such as photometric calibration, so our output data products still have units of countrate (ADU/s)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3a091817-7a3c-4a60-a914-f9ac54d36e56", + "metadata": {}, + "outputs": [], + "source": [ + "time_image2 = time.perf_counter()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7b00fda2-a7c0-445a-bf23-0d9b4f050419", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up a dictionary to define how the Image2 pipeline should be configured.\n", + "\n", + "# Boilerplate dictionary setup\n", + "image2dict = defaultdict(dict)\n", + "\n", + "image2steps = ['assign_wcs','flat_field','photom','resample']\n", + "\n", + "# Overrides for whether or not certain steps should be skipped (example)\n", + "image2dict['photom']['skip'] = True\n", + "image2dict['resample']['skip'] = True\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#image2dict['flat_field']['override_flat'] = 'myfile.fits' # Pixel flatfield" + ] + }, + { + "cell_type": "markdown", + "id": "247170cd-a3ee-4d9d-b8b3-f930727f8abc", + "metadata": {}, + "source": [ + "Find and sort the input files, ensuring use of absolute paths:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d19cd4e6-cc38-4e26-8e8b-7fb143856940", + "metadata": {}, + "outputs": [], + "source": [ + "# Use files from the detector1 output folder\n", + "rateints_files = sorted(glob.glob(os.path.join(det1_dir, 'jw*rateints.fits')))\n", + "for ii in range(len(rateints_files)):\n", + " rateints_files[ii] = os.path.abspath(rateints_files[ii])\n", + "\n", + "print(f\"Found {str(len(rateints_files))} science files\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "81d916e5-dd5d-472d-9aff-b2b4c86decf8", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "# Run Image2 stage of pipeline, specifying:\n", + "# output directory to save *_calints.fits files\n", + "# save_results flag set to True so the calints files are saved\n", + "\n", + "if doimage2:\n", + " for rateints in rateints_files:\n", + " calints_result = Image2Pipeline.call(rateints, output_dir=image2_dir, steps=image2dict, save_results=True)\n", + "else:\n", + " print(\"Skipping Image2 processing.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3d58b94d-e1d7-4b73-b598-756c99d81174", + "metadata": {}, + "outputs": [], + "source": [ + "# Print out the time benchmark\n", + "time1 = time.perf_counter()\n", + "print(f\"Runtime so far: {time1 - time0:0.0f} seconds\")\n", + "print(f\"Runtime for Image2: {time1 - time_image2:0.0f} seconds\")" + ] + }, + { + "cell_type": "markdown", + "id": "da57b7d0-95ef-4227-b730-6f54a3eaaa16", + "metadata": {}, + "source": [ + "Verify which 
pipeline steps were run:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "27390bbd-ae2d-49e7-977a-59d3765c0946", + "metadata": {}, + "outputs": [], + "source": [ + "if doimage2:\n", + " # Identify *_calints.fits files\n", + " calints_files = sorted(glob.glob(os.path.join(image2_dir, '*_calints.fits')))\n", + "\n", + " calints_targ = datamodels.open(calints_files[0])\n", + " calints_ref = datamodels.open(calints_files[1])\n", + "\n", + " # Check which steps were run:\n", + " for step, status in calints_targ.meta.cal_step.instance.items():\n", + " print(f\"{step}: {status}\")" + ] + }, + { + "cell_type": "markdown", + "id": "b77293ab-4eaa-4ebd-9f9b-bef8f6a2300f", + "metadata": {}, + "source": [ + "Check which reference files were used to calibrate the dataset:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6544744b-804e-4583-8311-e47b5e16c7fb", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "if doimage2:\n", + " for key, val in calints_targ.meta.ref_file.instance.items():\n", + " print(f\"{key}:\")\n", + " for param in calints_targ.meta.ref_file.instance[key]:\n", + " # for subkey, subval in key[val].items():\n", + " print(f\"\\t{calints_targ.meta.ref_file.instance[key][param]}\")" + ] + }, + { + "cell_type": "markdown", + "id": "cabc81ac-0f89-4bc7-a7ee-2bd6810a12ab", + "metadata": {}, + "source": [ + "Display one integration of the target and reference star's `calints` images:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f100bd40-e427-4f49-88f8-1720482158ea", + "metadata": {}, + "outputs": [], + "source": [ + "if doimage2:\n", + " # Image normalization (square root stretch)\n", + " norm1 = ImageNormalize(calints_targ.data[0], \n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + " norm2 = ImageNormalize(calints_ref.data[0], \n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + " # Display the images\n", + " fig, axs = plt.subplots(1,2, figsize=(9,5))\n", + " axs[0].set_title('Target Calints Data')\n", + " im1 = axs[0].imshow(calints_targ.data[0], norm=norm1)\n", + " axs[1].set_title('Ref Star Calints Data')\n", + " im2 = axs[1].imshow(calints_ref.data[0], norm=norm2)\n", + " \n", + " for im in [im1,im2]:\n", + " cbar = plt.colorbar(im, shrink=.9,location='bottom', pad=.05)\n", + " cbar.ax.tick_params(axis='x', rotation=45)\n", + " for ax in axs:\n", + " ax.axis('off')\n", + " plt.tight_layout()" + ] + }, + { + "cell_type": "markdown", + "id": "3ef9155e", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "7cfbb244-7af9-4cbf-90d9-08598378c16a", + "metadata": {}, + "source": [ + "## 7. AMI3 Pipeline\n", + "\n", + "In the [AMI3 stage of the pipeline](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_ami3.html),\n", + "the target and reference star `*_calints.fits` files are analyzed to extract interferometric observables, and then the target's observables are normalized by those of the reference star to produce a final set of calibrated observables.\n", + "\n", + "In order to run the AMI3 stage, an [Association](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/overview.html)\n", + "needs to be created to inform the pipeline which exposures should be treated as targets and which as reference stars.\n", + "\n", + "By default, the `AMI3` stage of the pipeline performs the following steps on NIRISS data:\n", + "* [AmiAnalyze](https://jwst-pipeline.readthedocs.io/en/latest/jwst/ami_analyze/description.html) - fits a model to each integration of the input image and computes interferometric observables (fringe phases, amplitudes, and derived quantities).\n", + "\n", + "* [AmiNormalize](https://jwst-pipeline.readthedocs.io/en/latest/jwst/ami_normalize/index.html) - normalizes the target's observables by the reference star observables.\n", + "\n", + "While a previous version of the AMI3 pipeline included an intermediate step to average together the observables from multiple exposures (`AmiAverage`), the step is not currently supported.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f8e81dc9-78dd-4bba-879c-9dfa2a4da69c", + "metadata": {}, + "outputs": [], + "source": [ + "time_ami3 = time.perf_counter()" + ] + }, + { + "cell_type": "markdown", + "id": "2c1523d2-f59b-4ddb-a78b-27e5da203a2e", + "metadata": {}, + "source": [ + "### Create ASDF Files \n", + "\n", + "Some optional input arguments to the `AmiAnalyze` step accept the name of an ASDF file containing a particular data tree. The default `bandpass` and `affine2d` arguments are currently generic. 
\n", + "\n", + "An example of creating and using such a file is shown here; an example of creating and using the bandpass ASDF files is shown below in " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "605f4b5a-6c1d-4804-9a57-993b46089094", + "metadata": {}, + "outputs": [], + "source": [ + "calints_files = sorted(glob.glob(os.path.join(image2_dir, '*_calints.fits')))\n", + "calints_f = datamodels.open(calints_files[0])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b12ea73d-b1d5-460a-9cec-b19adf444704", + "metadata": {}, + "outputs": [], + "source": [ + "# Example of writing an ASDF file specifying an affine distortion matrix to use\n", + "\n", + "# The default affine parameters are accessed with a special string, 'commissioning.'\n", + "# If affine2d = None, it will perform a search for the best-fit rotation only.\n", + "# To use a different affine distortion, such as one measured directly from the data or an ideal distortion,\n", + "# it must be specified in an ASDF file as shown here.\n", + "\n", + "# Create an ASDF file\n", + "asdf_affine2d = os.path.join(sci_dir,'affine2d_ideal.asdf')\n", + "\n", + "aff_tree = {\n", + "'mx': 1., # dimensionless x-magnification\n", + "'my': 1., # dimensionless y-magnification\n", + "'sx': 0., # dimensionless x shear\n", + "'sy': 0., # dimensionless y shear\n", + "'xo': 0., # x-offset in pupil space\n", + "'yo': 0., # y-offset in pupil space\n", + "'rotradccw': None }\n", + "\n", + "with open(asdf_affine2d, 'wb') as fh:\n", + " af = asdf.AsdfFile(aff_tree)\n", + " af.write_to(fh)\n", + " \n", + "af.close()\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8159e63d-3c89-44ef-9a43-caaccd1abb4a", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up a dictionary to define how the AMI3 pipeline should be configured\n", + "# Boilerplate dictionary setup\n", + "ami3dict = defaultdict(dict)\n", + "\n", + "# Options for ami_analyze step\n", + "ami3dict['ami_analyze']['firstfew'] = 5 # Analyze only the first 5 integrations\n", + "ami3dict['ami_analyze']['affine2d'] = asdf_affine2d # Use the affine distortion ASDF file we created\n", + "\n", + "\n", + "# Options for ami_normalize step\n", + "ami3dict['ami_normalize']['output_file'] = 'my_output_file' # this is overriden by the association name?\n", + "\n", + "# Overrides for whether or not certain steps should be skipped (example)\n", + "#ami3dict['ami_normalize']['skip'] = True\n", + "\n", + "# Overrides for various reference files\n", + "# Files should be in the base local directory or provide full path\n", + "#ami3dict['ami_analyze']['override_nrm'] = 'mynrmfile.fits' # NRM mask geometry file\n" + ] + }, + { + "cell_type": "markdown", + "id": "e10da872-21ae-46c2-a3b1-f9cdefae3b36", + "metadata": {}, + "source": [ + "Find and sort all of the input files, ensuring use of absolute paths" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5f0b5b47-62e9-4593-abc4-7ec630492e96", + "metadata": {}, + "outputs": [], + "source": [ + "# Science Files need the calints.fits files\n", + "calints_files = sorted(glob.glob(os.path.join(image2_dir, 'jw*calints.fits')))\n", + "for ii in range(len(calints_files)):\n", + " calints_files[ii] = os.path.abspath(calints_files[ii])\n", + "calintsiles = np.array(calints_files)\n", + "\n", + "print(f'Found {str(len(calints_files))} science files to process')" + ] + }, + { + "cell_type": "markdown", + "id": "db5cb6f5-44a1-4f28-95dc-b23a97451465", + "metadata": {}, + "source": [ + "### 
Create Association File\n",
+    "\n",
+    "An association file lists the exposures to be calibrated together in `Stage 3`\n",
+    "of the pipeline. Note that an association file is available for download\n",
+    "from MAST, with a filename of `*_asn.json`. Here we show how to create an\n",
+    "association file to point to the data products created when processing data\n",
+    "through the pipeline. Note that the output products will have a rootname\n",
+    "that is specified by the `product_name` in the association file. For\n",
+    "this tutorial, the rootname of the output products will be\n",
+    "`ami3_abdor_f480m_pos1`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1b31f80f-8d7b-45ae-8537-15d0208637df",
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Create a Level 3 Association\n",
+    "if doami3:\n",
+    "    associations = asn_from_list.asn_from_list(calints_files,\n",
+    "                                               rule=DMS_Level3_Base,\n",
+    "                                               product_name='ami3_abdor_f480m_pos1')\n",
+    "\n",
+    "    associations.data['asn_type'] = 'ami3'\n",
+    "    program = datamodels.open(calints_files[0]).meta.observation.program_number\n",
+    "    associations.data['program'] = program\n",
+    "\n",
+    "    # Set the second observation as the psf reference star\n",
+    "    associations.data['products'][0]['members'][1]['exptype'] = 'psf'\n",
+    "\n",
+    "    # Format association as .json file\n",
+    "    asn_filename, serialized = associations.dump(format=\"json\")\n",
+    "\n",
+    "    # Write out association file\n",
+    "    association_ami3 = os.path.join(sci_dir, asn_filename)\n",
+    "    with open(association_ami3, \"w\") as fd:\n",
+    "        fd.write(serialized)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "629cad5b-e5a7-473e-8583-ad12cda55fa4",
+   "metadata": {},
+   "source": [
+    "### Run AMI3 stage of the pipeline\n",
+    "\n",
+    "For the target and reference star exposures listed in the association file, the\n",
+    "`AMI3` stage of the pipeline will produce:\n",
+    "* `*ami-oi.fits` files (one for the target, one for the reference star) from the `AmiAnalyzeStep` containing interferometric observables\n",
+    "* `aminorm-oi.fits` file from the `AmiNormalizeStep` containing normalized interferometric observables"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e0cebfd8-a650-41a3-8ff0-a8fd43e079d6",
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Run Stage 3\n",
+    "if doami3:\n",
+    "    ami3_result = Ami3Pipeline.call(association_ami3, output_dir=ami3_dir, steps=ami3dict, save_results=True)\n",
+    "else:\n",
+    "    print('Skipping AMI3 processing')"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "f67d2c65-4614-4592-aa67-daa2de985013",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Print out the time benchmark\n",
+    "time1 = time.perf_counter()\n",
+    "print(f\"Runtime so far: {time1 - time0:0.0f} seconds\")\n",
+    "print(f\"Runtime for AMI3: {time1 - time_ami3:0.0f} seconds\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4bcb6c40-fbb8-4226-b620-64b0c02c8ceb",
+   "metadata": {},
+   "source": [
+    "### Verify which pipeline steps were run"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "c754530a-d4e2-493b-bf8b-bf8d1ad08770",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "if doami3:\n",
+    "    # Identify *ami-oi.fits file and open as datamodel\n",
+    "    ami_oi = glob.glob(os.path.join(ami3_dir, \"*_ami-oi.fits\"))[0]\n",
+    "    with datamodels.open(ami_oi) as oi_f:\n",
+    "        # Check which steps were run\n",
+    "        for step, status in 
oi_f.meta.cal_step.instance.items():\n", + " print(f\"{step}: {status}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "f73c5280-69a5-4796-b78f-724078a94d2b", + "metadata": {}, + "source": [ + "Check which reference files were used to calibrate the dataset" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0cfb0842-cc95-42d5-8852-593b85032c52", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "if doami3:\n", + " for key, val in oi_f.meta.ref_file.instance.items():\n", + " print(f\"{key}:\")\n", + " for param in oi_f.meta.ref_file.instance[key]:\n", + " # for subkey, subval in key[val].items():\n", + " print(f\"\\t{oi_f.meta.ref_file.instance[key][param]}\")\n", + " " + ] + }, + { + "cell_type": "markdown", + "id": "f624c896-6d2a-4578-a68c-de598a00e81a", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "578a54ca-95ad-4344-bdbf-9e62fc76df92", + "metadata": {}, + "source": [ + "## 8. (Alternative) Run AMI3 steps independently\n", + "\n", + "Additional output products are saved when the AmiAnalyze step of the AMI3 pipeline is run independently on the target and reference star exposures. Processing them independently also allows greater customization (e.g. use of different spectral models for fitting target/reference star). Here we will demonstrate calling the Ami_Analyze and Ami_Normalize steps individually, using different ASDF bandpass files for each." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a3bd9bd1-f94d-46ad-80f0-99d22e5fa785", + "metadata": {}, + "outputs": [], + "source": [ + "calints_files = sorted(glob.glob(os.path.join(image2_dir, 'jw*calints.fits')))\n", + "\n", + "print(f'Found {str(len(calints_files))} science files to process')" + ] + }, + { + "cell_type": "markdown", + "id": "853dc855-0269-4a51-a645-f01c23a1d5bb", + "metadata": {}, + "source": [ + "The importance of matching the spectral type of the model to that of the target is not fully understood yet. The default bandpass used to create the image-plane model is simply the relevant filter throughput curve (e.g. F480M, F430M, F380M, F277W) combined with a flat source spectrum as a placeholder. Here, we will show an example of creating and using separate ASDF files containing bandpasses made using spectral models for target and reference star." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9f67a868-a881-40dc-aa3a-7690f952b1e9", + "metadata": {}, + "outputs": [], + "source": [ + "# Fetch the AMI filter throughput model\n", + "refs = crds.getreferences({\n", + " 'INSTRUME' : calints_f.meta.instrument.name,\n", + " 'FILTER' : calints_f.meta.instrument.filter,\n", + " 'DATE-OBS' : calints_f.meta.observation.date,\n", + " 'TIME-OBS' : calints_f.meta.observation.time}, \n", + " reftypes=['throughput'])\n", + "\n", + "throughput_file = refs['throughput']\n", + "throughput_model = datamodels.open(throughput_file)\n", + "filt_spec = utils.get_filt_spec(throughput_model)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "901accd5-5d32-44bd-9033-9decbee66964", + "metadata": {}, + "outputs": [], + "source": [ + "# Target bandpass\n", + "\n", + "# Primary star of the target AB Dor A (HD 36705) is a K-type star (K0 III - K2 V)\n", + "# (https://ui.adsabs.harvard.edu/abs/1987ApJ...320..850V/abstract)\n", + "t_eff = 5143 # Effective temperature\n", + "fe_h = 0.18 # metallicity\n", + "log_g = 4.00 # log surface gravity\n", + "spec_abdor = stsyn.grid_to_spec('phoenix', t_eff, fe_h, log_g) \n", + "\n", + "# Combine the filter and source spectra\n", + "# This function also normalizes the spectrum, \n", + "# trims to where throughput > 0.1, \n", + "# and bins into `nlambda` wavelength bins.\n", + "# The returned array has shape (nlambda, 2)\n", + "nlamb = 19\n", + "bandpass = utils.combine_src_filt(filt_spec,\n", + " spec_abdor,\n", + " trim=0.01,\n", + " nlambda=nlamb)\n", + "\n", + "# Create an ASDF file\n", + "asdf_bandpass_abdor = os.path.join(ami3_dir_alt,'bandpass_f480m_abdor.asdf')\n", + "\n", + "bp_tree = {\"bandpass\": bandpass}\n", + "\n", + "with open(asdf_bandpass_abdor, 'wb') as fh:\n", + " af = asdf.AsdfFile(bp_tree)\n", + " af.write_to(fh)\n", + " \n", + "af.close()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c8bdf558-8ac0-4b1c-9e23-c88f4fc53b80", + "metadata": {}, + 
"outputs": [], + "source": [ + "# Reference star bandpass\n", + "\n", + "# Reference star HD 37093 is also K type (K5 III) but doesn't have measured stellar characteristics available\n", + "# So we will use the model quantities for that spectral type\n", + "spec_refstar = stsyn.grid_to_spec('phoenix', 4000, 0.0, 1.5)\n", + "\n", + "# Combine the filter and source spectra\n", + "# This function also normalizes the spectrum, \n", + "# trims to where throughput > 0.1, \n", + "# and bins into `nlambda` wavelength bins.\n", + "# The retruned array has shape (nlambda, 2)\n", + "nlamb = 19\n", + "bandpass = utils.combine_src_filt(filt_spec,\n", + " spec_refstar,\n", + " trim=0.01,\n", + " nlambda=nlamb)\n", + "\n", + "# Create an ASDF file\n", + "asdf_bandpass_refstar = os.path.join(ami3_dir_alt,'bandpass_f480m_hd37093.asdf')\n", + "\n", + "bp_tree = {\"bandpass\": bandpass}\n", + "\n", + "with open(asdf_bandpass_refstar, 'wb') as fh:\n", + " af = asdf.AsdfFile(bp_tree)\n", + " af.write_to(fh)\n", + " \n", + "af.close()\n", + "throughput_model.close()" + ] + }, + { + "cell_type": "markdown", + "id": "b05649aa-15dc-4599-b255-777821136e07", + "metadata": {}, + "source": [ + "Call the AmiAnalyzeStep on the target and calibrator *calints.fits files, using the corresponding bandpass file for each." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b7427e4b-e7d4-4c45-b84a-aa222aa2f23c", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "if doami3_independent:\n", + " for (calints, asdf_bandpass) in zip(calints_files, [asdf_bandpass_abdor, asdf_bandpass_refstar]):\n", + " # Call the AmiAnalyze step on each file\n", + " oimodel, oimodelmulti, lgmodel = AmiAnalyzeStep.call(calints,\n", + " save_results=True,\n", + " firstfew=5,\n", + " bandpass=asdf_bandpass,\n", + " output_dir=ami3_dir_alt)\n", + " \n", + " \n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "426ee34a-eb3b-4953-bc89-cf0c18ea4fed", + "metadata": {}, + "outputs": [], + "source": [ + "if doami3_independent:\n", + " # Collect the target and reference star *ami-oi.fits files\n", + " oifits = sorted(glob.glob(os.path.join(ami3_dir_alt, '*ami-oi.fits')))\n", + " # Call the AmiNormalize step on the target, ref star\n", + " # Arguments can be AmiOIModels or filenames\n", + " AmiNormalizeStep.call(oifits[0],oifits[1], output_dir=ami3_dir_alt, save_results=True)" + ] + }, + { + "cell_type": "markdown", + "id": "80362ba8", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "id": "9569d0b2-5682-4f48-aefa-95f091c8f75b", + "metadata": {}, + "source": [ + "## 9. Visualize the results" + ] + }, + { + "cell_type": "markdown", + "id": "caa7afe2-6a75-40c4-8307-e72166aa3baf", + "metadata": {}, + "source": [ + "### Plot interferometric observables\n", + "\n", + "We will now plot the interferometric observables for the target, reference star, and normalized target. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bb71d70b-4c89-4975-ac47-ba9162a7b8da", + "metadata": {}, + "outputs": [], + "source": [ + "# Define a function for plotting observables\n", + "\n", + "def plot_observables(ami_oimodel):\n", + " vis2 = ami_oimodel.vis2[\"VIS2DATA\"]\n", + " vis2_err = ami_oimodel.vis2[\"VIS2ERR\"]\n", + " t3phi = ami_oimodel.t3[\"T3PHI\"]\n", + " t3phi_err = ami_oimodel.t3[\"T3PHIERR\"]\n", + "\n", + " baselines = (ami_oimodel.vis2['UCOORD']**2 + ami_oimodel.vis2['VCOORD']**2)**0.5\n", + "\n", + " u1 = ami_oimodel.t3['U1COORD']\n", + " u2 = ami_oimodel.t3['U2COORD']\n", + " v1 = ami_oimodel.t3['V1COORD']\n", + " v2 = ami_oimodel.t3['V2COORD']\n", + " u3 = -(u1+u2)\n", + " v3 = -(v1+v2)\n", + " baselines_t3 = []\n", + " for k in range(len(u1)):\n", + " B1 = np.sqrt(u1[k]**2+v1[k]**2)\n", + " B2 = np.sqrt(u2[k]**2+v2[k]**2)\n", + " B3 = np.sqrt(u3[k]**2+v3[k]**2)\n", + " baselines_t3.append(np.max([B1, B2, B3])) # use longest baseline of the three\n", + " baselines_t3 = np.array(baselines_t3)\n", + "\n", + " fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n", + " ax1.errorbar(baselines_t3, t3phi, yerr=t3phi_err, fmt=\"go\")\n", + " ax2.errorbar(baselines, vis2, yerr=vis2_err, fmt=\"go\")\n", + " ax1.set_xlabel(r\"$B_{max}$ [m]\", size=12)\n", + " ax1.set_ylabel(\"Closure phase [deg]\", size=12)\n", + " ax1.set_title(\"Closure Phase\", size=14)\n", + " ax2.set_title(\"Squared Visibility\", size=14)\n", + " ax2.set_xlabel(r\"$B_{max}$ [m]\", size=12)\n", + " ax2.set_ylabel(\"Squared Visibility\", size=12)\n", + " plt.suptitle(ami_oimodel.meta.filename, fontsize=16)\n", + " ax1.set_ylim([-3.5, 3.5]) # closure phase y limits\n", + " ax2.set_ylim([0.5, 1.1]) # sqv y limits\n" + ] + }, + { + "cell_type": "markdown", + "id": "f8f7254b-b679-478d-91ed-56cb52981dfd", + "metadata": {}, + "source": [ + "First we'll look at the target and calibrator's squared visibilities and closure phases. We plot the squared visibilities against their corresponding baselines (units of meters projected back to the primary meter). We plot the closure phases against the longest of the three baselines that forms the \"closure triangle\" whose phases were summed to calculate the closure phase." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "13a834cc-105f-4006-8aea-7b2fbb292467", + "metadata": {}, + "outputs": [], + "source": [ + "if doviz:\n", + " # Get the target and calibrator ami-oi.fits files\n", + " oifitslist = sorted(glob.glob(os.path.join(ami3_dir,\"*ami-oi.fits\")))\n", + "\n", + " # Open them as datamodels\n", + " amioi_targ = datamodels.open(oifitslist[0])\n", + " amioi_ref = datamodels.open(oifitslist[1])\n", + "\n", + " # Plot the observables\n", + " plot_observables(amioi_targ)\n", + " plot_observables(amioi_ref)" + ] + }, + { + "cell_type": "markdown", + "id": "424dcd6b-ae89-44ba-9c71-cd6fe1c35f82", + "metadata": {}, + "source": [ + "Now we will plot the final calibrated data product; the target normalized by the reference star. 
The reference star closure phases are subtracted from the target closure phases, and the target squared visibilities are divided by the reference star squared visibilities. These interferometric observables can then be analyzed to recover information about the observed scene." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c6f1ed63-0bed-412b-ab04-54caa7d61bbd", + "metadata": {}, + "outputs": [], + "source": [ + "if doviz:\n", + " # Identify calibrated *_aminorm-oi.fits file and open as datamodel\n", + " abdor_oifits = glob.glob(os.path.join(ami3_dir, \"*_aminorm-oi.fits\"))[0]\n", + " amioi_norm = datamodels.open(abdor_oifits)\n", + "\n", + " # Plot the observables\n", + " plot_observables(amioi_norm)" + ] + }, + { + "cell_type": "markdown", + "id": "a1727d83-646c-4ad0-a6da-0a4dfeedec7f", + "metadata": {}, + "source": [ + "### Display the best-fit model\n", + "\n", + "We can also look at the cleaned data, model, and residual images that are saved in the auxiliary `*amilg.fits` data products. Each image has been normalized by the peak pixel value of the data, and the data and model are displayed on a square root stretch to emphasize the fainter features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4705d1a1-123f-443e-b086-6ad473811315", + "metadata": {}, + "outputs": [], + "source": [ + "if doviz:\n", + " # Find the data files\n", + " amilg = sorted(glob.glob(os.path.join(ami3_dir_alt, '*amilg.fits')))\n", + "\n", + " # Open the first one as an AmiLGModel\n", + " amilgmodel = datamodels.open(amilg[0])\n", + "\n", + " # Plot the data, model, residual\n", + " norm = ImageNormalize(amilgmodel.norm_centered_image[0], \n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + " fig, axs = plt.subplots(1,3, figsize=(12,5))\n", + " axs[0].set_title('Normalized Data')\n", + " im1 = axs[0].imshow(amilgmodel.norm_centered_image[0], norm=norm)\n", + " axs[1].set_title('Normalized Model')\n", + " im2 = axs[1].imshow(amilgmodel.norm_fit_image[0], norm=norm)\n", + " axs[2].set_title('Normalized Residual (Data-Model)')\n", + " im3 = axs[2].imshow(amilgmodel.norm_resid_image[0])\n", + " \n", + " for im in [im1,im2,im3]:\n", + " plt.colorbar(im, shrink=.95,location='bottom', pad=.05)\n", + " for ax in axs:\n", + " ax.axis('off')\n", + " plt.tight_layout()" + ] + }, + { + "cell_type": "markdown", + "id": "89ee8944", + "metadata": {}, + "source": [ + "
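To attach a number to the visual impression above, the residual image already loaded into `amilgmodel` can be summarized directly. A quick sketch, using only attributes referenced elsewhere in this notebook:

```python
import numpy as np

# Quantify the fit quality from the peak-normalized residual image
resid = amilgmodel.norm_resid_image[0]
print(f"residual RMS:   {np.sqrt(np.nanmean(resid**2)):.2e}")
print(f"max |residual|: {np.nanmax(np.abs(resid)):.2e}")
```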
" + ] + }, + { + "cell_type": "markdown", + "id": "6303c0e7-13af-4e4c-bed7-8b85cc8e3645", + "metadata": {}, + "source": [ + "\"stsci_logo\" " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.4" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 12ad6107cd848fe19802d58312d4ee9d2e703d95 Mon Sep 17 00:00:00 2001 From: Rachel Cooper Date: Fri, 7 Mar 2025 18:56:08 -0500 Subject: [PATCH 2/6] Minor notebook improvements, added requirements.txt file --- .../NIRISS/AMI/JWPipeNB-niriss-ami.ipynb | 147 +++++++++++------- notebooks/NIRISS/AMI/requirements.txt | 8 + 2 files changed, 101 insertions(+), 54 deletions(-) create mode 100644 notebooks/NIRISS/AMI/requirements.txt diff --git a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb index 8a0a0f5..7e2a1ae 100755 --- a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb +++ b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb @@ -204,10 +204,10 @@ "# produced from the prior stage.\n", "\n", "# Science processing\n", - "dodet1 = False # calwebb_detector1\n", - "doimage2 = False # calwebb_image2\n", - "doami3 = False # calwebb_ami3\n", - "doami3_independent = False # Run ami_analyze, ami_normalize separately\n", + "dodet1 = True # calwebb_detector1\n", + "doimage2 = True # calwebb_image2\n", + "doami3 = True # calwebb_ami3\n", + "doami3_independent = True # Run ami_analyze, ami_normalize separately\n", "doviz = True # Visualize calwebb_ami3 output" ] }, @@ -237,7 +237,7 @@ "\n", "# Set CRDS context (if overriding to use a specific version of reference\n", "# files; leave commented out to use latest reference files by default)\n", - "#%env CRDS_CONTEXT jwst_1254.pmap\n", + "# os.environ['CRDS_CONTEXT'] = 'jwst_1338.pmap'\n", "\n", "# Check whether the local CRDS cache directory has been set.\n", "# If not, set it to the user home directory\n", @@ -455,6 +455,7 @@ " sci_files.extend(filtered_products['dataURI'])\n", " \n", " # Select only the exposures we want to use based on filename\n", + " # Obs 12 is the target, Obs 15 is the reference star\n", " sci_observtn = ['012','015']\n", " visit = '001'\n", " visitgroup = '03'\n", @@ -561,6 +562,7 @@ "uncal_files = sorted(glob.glob(os.path.join(uncal_dir, '*_uncal.fits')))\n", "\n", "# Open file as JWST datamodels\n", + "# Obs 12 is the target, Obs 15 is the reference star so they are (targ, ref) order when list is sorted\n", "uncal_targ = datamodels.open(uncal_files[0])\n", "uncal_ref = datamodels.open(uncal_files[1])\n", "\n", @@ -632,14 +634,6 @@ "plt.tight_layout()" ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "4c2a4925-e5bf-4d48-8843-68960f7bae9e", - "metadata": {}, - "outputs": [], - "source": [] - }, { "cell_type": "code", "execution_count": null, @@ -674,7 +668,9 @@ "and contain the fitted ramp slopes for each integration.\n", "2D countrate data products (`*_rate.fits`) are also\n", "created (nrows x ncols) which have been averaged over all\n", - "integrations. 
the fitted ramp slopes\n", + "integrations.\n", + "\n", + "the fitted ramp slopes\n", "for each integration.\n", "\n", "By default, all steps in the Detector1 stage of the pipeline are run for\n", @@ -740,9 +736,7 @@ "# Files should be in the base local directory or provide full path\n", "#det1dict['dq_init']['override_mask'] = 'myfile.fits' # Bad pixel mask\n", "#det1dict['saturation']['override_saturation'] = 'myfile.fits' # Saturation\n", - "#det1dict['reset']['override_reset'] = 'myfile.fits' # Reset\n", "#det1dict['linearity']['override_linearity'] = 'myfile.fits' # Linearity\n", - "#det1dict['rscd']['override_rscd'] = 'myfile.fits' # RSCD\n", "#det1dict['dark_current']['override_dark'] = 'myfile.fits' # Dark current subtraction\n", "#det1dict['jump']['override_gain'] = 'myfile.fits' # Gain used by jump step\n", "#det1dict['ramp_fit']['override_gain'] = 'myfile.fits' # Gain used by ramp fitting step\n", @@ -814,11 +808,11 @@ "if dodet1:\n", " # find rateints files\n", " rateints_files = sorted(glob.glob(os.path.join(det1_dir, '*_rateints.fits')))\n", - "\n", + " \n", " # Read in file as datamodel\n", " rateints_targ = datamodels.open(rateints_files[0])\n", " rateints_ref = datamodels.open(rateints_files[1])\n", - "\n", + " \n", " # Check which steps were run\n", " for step, status in rateints_targ.meta.cal_step.instance.items():\n", " print(f\"{step}: {status}\")\n" @@ -843,7 +837,6 @@ " for key, val in rateints_targ.meta.ref_file.instance.items():\n", " print(f\"{key}:\")\n", " for param in rateints_targ.meta.ref_file.instance[key]:\n", - " # for subkey, subval in key[val].items():\n", " print(f\"\\t{rateints_targ.meta.ref_file.instance[key][param]}\")" ] }, @@ -1051,7 +1044,6 @@ " for key, val in calints_targ.meta.ref_file.instance.items():\n", " print(f\"{key}:\")\n", " for param in calints_targ.meta.ref_file.instance[key]:\n", - " # for subkey, subval in key[val].items():\n", " print(f\"\\t{calints_targ.meta.ref_file.instance[key][param]}\")" ] }, @@ -1200,13 +1192,10 @@ "ami3dict = defaultdict(dict)\n", "\n", "# Options for ami_analyze step\n", - "ami3dict['ami_analyze']['firstfew'] = 5 # Analyze only the first 5 integrations\n", + "# ami3dict['ami_analyze']['firstfew'] = 5 # Analyze only the first 5 integrations to speed up demo\n", + "# ami3dict['ami_analyze']['affine2d'] = 11 # Increase oversampling of image plane fit (increases runtime)\n", "ami3dict['ami_analyze']['affine2d'] = asdf_affine2d # Use the affine distortion ASDF file we created\n", "\n", - "\n", - "# Options for ami_normalize step\n", - "ami3dict['ami_normalize']['output_file'] = 'my_output_file' # this is overriden by the association name?\n", - "\n", "# Overrides for whether or not certain steps should be skipped (example)\n", "#ami3dict['ami_normalize']['skip'] = True\n", "\n", @@ -1230,11 +1219,9 @@ "metadata": {}, "outputs": [], "source": [ - "# Science Files need the calints.fits files\n", + "# AMI3 takes the calints.fits files\n", "calints_files = sorted(glob.glob(os.path.join(image2_dir, 'jw*calints.fits')))\n", - "for ii in range(len(calints_files)):\n", - " calints_files[ii] = os.path.abspath(calints_files[ii])\n", - "calintsiles = np.array(calints_files)\n", + "calints_files = [os.path.abspath(calints) for calints in calints_files]\n", "\n", "print(f'Found {str(len(calints_files))} science files to process')" ] @@ -1248,12 +1235,10 @@ "\n", "An association file lists the exposures to calibrated together in `Stage 3`\n", "of the pipeline. 
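For orientation, the association assembled below has (schematically) the following shape. This is a sketch inferred from the `associations.data` accesses in the next cells; the member filenames are illustrative, pieced together from the program, observation, visit, and exposure numbers used in this notebook, and real association files carry additional bookkeeping fields:

```python
# Illustrative layout of the Level-3 AMI association (not written to disk here)
asn_sketch = {
    "program": "01093",
    "products": [
        {
            "name": "ami3_abdor_f480m_pos1",
            "members": [
                # Obs 12: the science target
                {"expname": "jw01093012001_03102_00001_nis_calints.fits",
                 "exptype": "science"},
                # Obs 15: the PSF reference star
                {"expname": "jw01093015001_03102_00001_nis_calints.fits",
                 "exptype": "psf"},
            ],
        }
    ],
}
```

The key detail for AMI is the `exptype` field: one member is marked `psf` so the pipeline knows which exposure is the reference star.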
Note that an association file is available for download\n", - "from MAST, with a filename of `*_asn.json`. Here we show how to create an\n", - "association file to point to the data products created when processing data\n", - "through the pipeline. Note that the output products will have a rootname\n", - "that is specified by the `product_name` in the association file. For\n", - "this tutorial, the rootname of the output products will be\n", - "`image3_association`." + "from MAST, with a filename of `*_asn.json`, though it may require additional manipulation for AMI.\n", + "Here we show how to create an association file to point to the data products created when processing data\n", + "through the pipeline. Note that the final output products will have a rootname that is specified by the `product_name`\n", + "in the association file. For this tutorial, the rootname of the final output product will be `ami3_abdor_f480m_pos1`." ] }, { @@ -1275,8 +1260,14 @@ " program = datamodels.open(calints_files[0]).meta.observation.program_number\n", " associations.data['program'] = program\n", "\n", + " # We can confirm which file is the reference star by the 'psf_reference' datamodel attribute\n", + " for num, calints in enumerate(calints_files):\n", + " calints_f = datamodels.open(calints)\n", + " if calints_f.meta.exposure.psf_reference is True:\n", + " which_psf = num # index in the list of files\n", + "\n", " # Set the second observation as the psf reference star\n", - " associations.data['products'][0]['members'][1]['exptype'] = 'psf'\n", + " associations.data['products'][0]['members'][which_psf]['exptype'] = 'psf'\n", " \n", " # Format association as .json file\n", " asn_filename, serialized = associations.dump(format=\"json\")\n", @@ -1296,8 +1287,10 @@ "\n", "For the target and reference star exposures listed in the association file, the\n", "`AMI3` stage of the pipeline will produce:\n", - "* `*ami-oi.fits` files (one for the target, one for the reference star) from the `AmiAnalyzeStep` containing interferometric observables\n", - "* `aminorm-oi.fits` file from the `AmiNormalizeStep` containing normalized interferometric observables" + "* `*ami-oi.fits` files (one for the target, one for the reference star) from the `AmiAnalyzeStep` containing averaged interferometric observables\n", + "* `*aminorm-oi.fits` file from the `AmiNormalizeStep` containing normalized interferometric observables\n", + "\n", + "The `*ami-oi.fits` and `*aminorm-oi.fits` files adhere to the [OIFITS2](https://doi.org/10.1051/0004-6361/201526405) format, which is a registered FITS standard for optical and infrared interferometry data." ] }, { @@ -1374,9 +1367,7 @@ " for key, val in oi_f.meta.ref_file.instance.items():\n", " print(f\"{key}:\")\n", " for param in oi_f.meta.ref_file.instance[key]:\n", - " # for subkey, subval in key[val].items():\n", - " print(f\"\\t{oi_f.meta.ref_file.instance[key][param]}\")\n", - " " + " print(f\"\\t{oi_f.meta.ref_file.instance[key][param]}\")" ] }, { @@ -1394,7 +1385,7 @@ "source": [ "## 8. (Alternative) Run AMI3 steps independently\n", "\n", - "Additional output products are saved when the AmiAnalyze step of the AMI3 pipeline is run independently on the target and reference star exposures. Processing them independently also allows greater customization (e.g. use of different spectral models for fitting target/reference star). Here we will demonstrate calling the Ami_Analyze and Ami_Normalize steps individually, using different ASDF bandpass files for each." 
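Because the `*ami-oi.fits` products follow the OIFITS2 convention, they can also be examined with plain `astropy.io.fits`, independent of the pipeline datamodels. A sketch, assuming the `AMI3` run above has populated `ami3_dir`:

```python
import glob
import os

from astropy.io import fits

# List the binary-table extensions of the first averaged-observables product
oifits_path = sorted(glob.glob(os.path.join(ami3_dir, '*ami-oi.fits')))[0]
with fits.open(oifits_path) as hdul:
    hdul.info()  # expect OIFITS2 tables such as OI_WAVELENGTH, OI_VIS2, OI_T3
```

This is also the form in which third-party interferometry tools read these files.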
+ "Additional output products are saved when the AmiAnalyze step of the AMI3 pipeline is run independently on the target and reference star exposures. Processing them independently also allows greater customization (e.g. use of different spectral models for fitting target/reference star). Here we will demonstrate calling the Ami_Analyze and Ami_Normalize steps individually, using different ASDF bandpass files for each.\n" ] }, { @@ -1414,7 +1405,9 @@ "id": "853dc855-0269-4a51-a645-f01c23a1d5bb", "metadata": {}, "source": [ - "The importance of matching the spectral type of the model to that of the target is not fully understood yet. The default bandpass used to create the image-plane model is simply the relevant filter throughput curve (e.g. F480M, F430M, F380M, F277W) combined with a flat source spectrum as a placeholder. Here, we will show an example of creating and using separate ASDF files containing bandpasses made using spectral models for target and reference star." + "### Create bandpass ASDF files\n", + "\n", + "The importance of matching the spectral type of the model to that of the target is yet to be quantified. The default bandpass used to create the image-plane model is simply the relevant filter throughput curve (e.g. F480M, F430M, F380M, F277W) combined with a flat source spectrum as a placeholder. Here, we will show an example of creating and using separate ASDF files containing bandpasses made using spectral models for target and reference star." ] }, { @@ -1518,7 +1511,13 @@ "id": "b05649aa-15dc-4599-b255-777821136e07", "metadata": {}, "source": [ - "Call the AmiAnalyzeStep on the target and calibrator *calints.fits files, using the corresponding bandpass file for each." + "### Call the AmiAnalyzeStep\n", + "Run on the target and calibrator `*calints.fits` files, using the corresponding bandpass file for each.\n", + "\n", + "When called individually rather than as part of the `calwebb_ami3 pipeline`, the `AmiAnalyzeStep` saves two additional types of data products. For each input file, the step produces:\n", + "* `*ami-oi.fits` file containing of observables averaged over all integrations of the input file. The error is taken to be the standard error of the mean, where the variance is the covariance between amplitudes and phases (e.g. fringe amplitudes and fringe phases, closure phases and triple-product amplitudes).\n", + "* `*amimulti-oi.fits` file containing tables of observables for each integration. Files do not contain error estimates.\n", + "* `*amilg.fits` file containing cropped and cleaned image plane data, model, and residuals, and parameters of the best-fit LG model.\n" ] }, { @@ -1532,14 +1531,23 @@ "source": [ "if doami3_independent:\n", " for (calints, asdf_bandpass) in zip(calints_files, [asdf_bandpass_abdor, asdf_bandpass_refstar]):\n", - " # Call the AmiAnalyze step on each file\n", + " # Call the AmiAnalyze step on each file, bandpass\n", " oimodel, oimodelmulti, lgmodel = AmiAnalyzeStep.call(calints,\n", " save_results=True,\n", - " firstfew=5,\n", + " # firstfew=5, # Uncomment to speed up processing\n", " bandpass=asdf_bandpass,\n", - " output_dir=ami3_dir_alt)\n", - " \n", - " \n" + " output_dir=ami3_dir_alt)" + ] + }, + { + "cell_type": "markdown", + "id": "015873e2-ddff-4695-8efa-b8093ec0905b", + "metadata": {}, + "source": [ + "### Call the AmiNormalizeStep\n", + "\n", + "This step takes one target and one PSF reference star *ami-oi.fits file as input. 
It will produce:\n", + "* `aminorm-oi.fits` file containing tables of normalized observables" ] }, { @@ -1557,6 +1565,30 @@ " AmiNormalizeStep.call(oifits[0],oifits[1], output_dir=ami3_dir_alt, save_results=True)" ] }, + { + "cell_type": "markdown", + "id": "8ce97930-4dba-4cfb-848e-57f632e509d9", + "metadata": {}, + "source": [ + "### Verify which pipeline steps were run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aecd376e-9d17-42db-acd3-5b7d2a842294", + "metadata": {}, + "outputs": [], + "source": [ + "if doami3_independent:\n", + " # Collect the final aminorm-oi.fits file \n", + " aminorm_oi = glob.glob(os.path.join(ami3_dir_alt, '*aminorm-oi.fits'))[0]\n", + " with datamodels.open(aminorm_oi) as oi_f:\n", + " # Check which steps were run\n", + " for step, status in oi_f.meta.cal_step.instance.items():\n", + " print(f\"{step}: {status}\")" + ] + }, { "cell_type": "markdown", "id": "80362ba8", @@ -1593,13 +1625,18 @@ "# Define a function for plotting observables\n", "\n", "def plot_observables(ami_oimodel):\n", + " # Read the observables from the datamodel\n", + " # Squared visibilities and uncertainties\n", " vis2 = ami_oimodel.vis2[\"VIS2DATA\"]\n", " vis2_err = ami_oimodel.vis2[\"VIS2ERR\"]\n", + " # Closure phases and uncertainties\n", " t3phi = ami_oimodel.t3[\"T3PHI\"]\n", " t3phi_err = ami_oimodel.t3[\"T3PHIERR\"]\n", "\n", + " # Construct baselines between the U and V coordinates of sub-apertures\n", " baselines = (ami_oimodel.vis2['UCOORD']**2 + ami_oimodel.vis2['VCOORD']**2)**0.5\n", "\n", + " # Construct baselines between combinations of three sub-apertures\n", " u1 = ami_oimodel.t3['U1COORD']\n", " u2 = ami_oimodel.t3['U2COORD']\n", " v1 = ami_oimodel.t3['V1COORD']\n", @@ -1611,9 +1648,11 @@ " B1 = np.sqrt(u1[k]**2+v1[k]**2)\n", " B2 = np.sqrt(u2[k]**2+v2[k]**2)\n", " B3 = np.sqrt(u3[k]**2+v3[k]**2)\n", - " baselines_t3.append(np.max([B1, B2, B3])) # use longest baseline of the three\n", + " # Use longest baseline of the three for plotting\n", + " baselines_t3.append(np.max([B1, B2, B3])) \n", " baselines_t3 = np.array(baselines_t3)\n", - "\n", + " \n", + " # Plot closure phases, squared visibilities against their baselines \n", " fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n", " ax1.errorbar(baselines_t3, t3phi, yerr=t3phi_err, fmt=\"go\")\n", " ax2.errorbar(baselines, vis2, yerr=vis2_err, fmt=\"go\")\n", @@ -1624,8 +1663,8 @@ " ax2.set_xlabel(r\"$B_{max}$ [m]\", size=12)\n", " ax2.set_ylabel(\"Squared Visibility\", size=12)\n", " plt.suptitle(ami_oimodel.meta.filename, fontsize=16)\n", - " ax1.set_ylim([-3.5, 3.5]) # closure phase y limits\n", - " ax2.set_ylim([0.5, 1.1]) # sqv y limits\n" + " ax1.set_ylim([-3.5, 3.5])\n", + " ax2.set_ylim([0.5, 1.1]) " ] }, { @@ -1661,7 +1700,7 @@ "id": "424dcd6b-ae89-44ba-9c71-cd6fe1c35f82", "metadata": {}, "source": [ - "Now we will plot the final calibrated data product; the target normalized by the reference star. The reference star closure phases are subtracted from the target closure phases, and the target squared visibilities are divided by the reference star squared visibilities. These interferometric observables can then be analyzed to recover information about the observed scene." + "Now we will plot the final calibrated data product; the target normalized by the reference star. The reference star closure phases are subtracted from the target closure phases, and the target squared visibilities are divided by the reference star squared visibilities. 
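In terms of the OIFITS columns plotted later in this notebook, the normalization amounts to the following schematic sketch (it reuses the `amioi_targ` and `amioi_ref` datamodels opened in the visualization section and omits the uncertainty propagation that the real step performs):

```python
import numpy as np

# Closure phases calibrate by subtraction...
t3phi_cal = np.asarray(amioi_targ.t3["T3PHI"]) - np.asarray(amioi_ref.t3["T3PHI"])
# ...and squared visibilities by division
vis2_cal = (np.asarray(amioi_targ.vis2["VIS2DATA"])
            / np.asarray(amioi_ref.vis2["VIS2DATA"]))
```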
These interferometric observables can then be analyzed to recover information about the observed scene. (reference specific packages?)" ] }, { diff --git a/notebooks/NIRISS/AMI/requirements.txt b/notebooks/NIRISS/AMI/requirements.txt new file mode 100644 index 0000000..f58f0cc --- /dev/null +++ b/notebooks/NIRISS/AMI/requirements.txt @@ -0,0 +1,8 @@ +asdf +astropy>=4.3.1 +astroquery +crds +jwst>=1.17.0 +numpy<2.0 +stsynphot +synphot From 3230fe510530cc12b527af4876b6cb0f61892c41 Mon Sep 17 00:00:00 2001 From: Rachel Cooper Date: Mon, 10 Mar 2025 12:01:36 -0400 Subject: [PATCH 3/6] pep8 fixes --- .../NIRISS/AMI/JWPipeNB-niriss-ami.ipynb | 140 +++++++++--------- 1 file changed, 66 insertions(+), 74 deletions(-) diff --git a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb index 7e2a1ae..b38e330 100755 --- a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb +++ b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb @@ -309,7 +309,6 @@ "\n", "# For optional input argument overrides\n", "import asdf\n", - "from synphot import SourceSpectrum\n", "import stsynphot as stsyn\n", "\n", "# for JWST calibration pipeline\n", @@ -422,8 +421,7 @@ " sci_obs_id_table = Observations.query_criteria(instrument_name=[\"NIRISS/AMI\"],\n", " proposal_id=[program],\n", " filters=['F480M;NRM'], # Data for Specific Filter\n", - " obs_id=['jw' + program + '*']\n", - " )" + " obs_id=['jw' + program + '*'])" ] }, { @@ -456,21 +454,18 @@ " \n", " # Select only the exposures we want to use based on filename\n", " # Obs 12 is the target, Obs 15 is the reference star\n", - " sci_observtn = ['012','015']\n", + " sci_observtn = ['012', '015']\n", " visit = '001'\n", " visitgroup = '03'\n", " seq_id = \"1\"\n", " act_id = '02'\n", " expnum = '00001'\n", "\n", + " # Construct the filenames and select files based on them\n", " filestrings = ['jw'+program+sciobs+visit+'_'+visitgroup+seq_id+act_id+'_'+expnum for sciobs in sci_observtn]\n", - " #sci_files_to_download = []\n", - " \n", - "\n", " sci_files_to_download = [scifile for scifile in sci_files if any(filestr in scifile for filestr in filestrings)]\n", - " \n", " sci_files_to_download = sorted(set(sci_files_to_download))\n", - " print(f\"Science files selected for downloading: {len(sci_files_to_download)}\")\n" + " print(f\"Science files selected for downloading: {len(sci_files_to_download)}\")" ] }, { @@ -569,7 +564,6 @@ "for uncal_f in [uncal_targ, uncal_ref]:\n", " # print file name\n", " print(uncal_f.meta.filename)\n", - " \n", " # Print out exposure info\n", " print(f\"Instrument: {uncal_f.meta.instrument.name}\")\n", " print(f\"Filter: {uncal_f.meta.instrument.filter}\")\n", @@ -613,21 +607,21 @@ "source": [ "# Image normalization (square root stretch)\n", "norm1 = ImageNormalize(uncal_targ.data[0][-1], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", "norm2 = ImageNormalize(uncal_ref.data[0][-1], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", "\n", "# Display the images\n", - "fig, axs = plt.subplots(1,2, figsize=(9,5))\n", + "fig, axs = plt.subplots(1, 2, figsize=(9, 5))\n", "axs[0].set_title('Target Uncal Data')\n", "im1 = axs[0].imshow(uncal_targ.data[0][-1], norm=norm1)\n", "axs[1].set_title('Ref Star Uncal Data')\n", "im2 = axs[1].imshow(uncal_ref.data[0][-1], norm=norm2)\n", "\n", - "for im in [im1,im2]:\n", - " cbar = plt.colorbar(im, 
shrink=.9,location='bottom', pad=.05)\n", + "for im in [im1, im2]:\n", + " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05)\n", " cbar.ax.tick_params(axis='x', rotation=45)\n", "for ax in axs:\n", " ax.axis('off')\n", @@ -722,9 +716,9 @@ "det1dict = defaultdict(dict)\n", "\n", "# Step names are copied here for reference\n", - "det1_steps = ['group_scale','dq_init','saturation','ipc','superbias','refpix',\n", - " 'linearity','persistence','dark_current','charge_migration',\n", - " 'jump','ramp_fit','gain_scale']\n", + "det1_steps = ['group_scale', 'dq_init', 'saturation', 'ipc', 'superbias', 'refpix',\n", + " 'linearity', 'persistence', 'dark_current', 'charge_migration',\n", + " 'jump', 'ramp_fit', 'gain_scale']\n", "\n", "# Overrides for whether or not certain steps should be skipped\n", "# skipping the ipc, persistence, and dark steps\n", @@ -815,7 +809,7 @@ " \n", " # Check which steps were run\n", " for step, status in rateints_targ.meta.cal_step.instance.items():\n", - " print(f\"{step}: {status}\")\n" + " print(f\"{step}: {status}\")" ] }, { @@ -857,19 +851,19 @@ "source": [ "if dodet1:\n", " norm1 = ImageNormalize(rateints_targ.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " norm2 = ImageNormalize(rateints_ref.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", - " fig, axs = plt.subplots(1,2, figsize=(9,5))\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", + " fig, axs = plt.subplots(1, 2, figsize=(9, 5))\n", " axs[0].set_title('Target Rateints Data')\n", " im1 = axs[0].imshow(rateints_targ.data[0], norm=norm1)\n", " axs[1].set_title('Ref Star Rateints Data')\n", " im2 = axs[1].imshow(rateints_ref.data[0], norm=norm2)\n", " \n", - " for im in [im1,im2]:\n", - " cbar = plt.colorbar(im, shrink=.9,location='bottom', pad=.05)\n", + " for im in [im1, im2]:\n", + " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05)\n", " cbar.ax.tick_params(axis='x', rotation=45)\n", " for ax in axs:\n", " ax.axis('off')\n", @@ -929,7 +923,7 @@ "# Boilerplate dictionary setup\n", "image2dict = defaultdict(dict)\n", "\n", - "image2steps = ['assign_wcs','flat_field','photom','resample']\n", + "image2steps = ['assign_wcs', 'flat_field', 'photom', 'resample']\n", "\n", "# Overrides for whether or not certain steps should be skipped (example)\n", "image2dict['photom']['skip'] = True\n", @@ -1065,20 +1059,20 @@ "if doimage2:\n", " # Image normalization (square root stretch)\n", " norm1 = ImageNormalize(calints_targ.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " norm2 = ImageNormalize(calints_ref.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " # Display the images\n", - " fig, axs = plt.subplots(1,2, figsize=(9,5))\n", + " fig, axs = plt.subplots(1, 2, figsize=(9, 5))\n", " axs[0].set_title('Target Calints Data')\n", " im1 = axs[0].imshow(calints_targ.data[0], norm=norm1)\n", " axs[1].set_title('Ref Star Calints Data')\n", " im2 = axs[1].imshow(calints_ref.data[0], norm=norm2)\n", " \n", - " for im in [im1,im2]:\n", - " cbar = plt.colorbar(im, shrink=.9,location='bottom', pad=.05)\n", + " for im in [im1, im2]:\n", + " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05)\n", " cbar.ax.tick_params(axis='x', rotation=45)\n", " 
for ax in axs:\n", " ax.axis('off')\n", @@ -1162,22 +1156,21 @@ "# it must be specified in an ASDF file as shown here.\n", "\n", "# Create an ASDF file\n", - "asdf_affine2d = os.path.join(sci_dir,'affine2d_ideal.asdf')\n", + "asdf_affine2d = os.path.join(sci_dir, 'affine2d_ideal.asdf')\n", "\n", - "aff_tree = {\n", - "'mx': 1., # dimensionless x-magnification\n", - "'my': 1., # dimensionless y-magnification\n", - "'sx': 0., # dimensionless x shear\n", - "'sy': 0., # dimensionless y shear\n", - "'xo': 0., # x-offset in pupil space\n", - "'yo': 0., # y-offset in pupil space\n", - "'rotradccw': None }\n", + "aff_tree = {'mx': 1., # dimensionless x-magnification\n", + " 'my': 1., # dimensionless y-magnification\n", + " 'sx': 0., # dimensionless x shear\n", + " 'sy': 0., # dimensionless y shear\n", + " 'xo': 0., # x-offset in pupil space\n", + " 'yo': 0., # y-offset in pupil space\n", + " 'rotradccw': None}\n", "\n", "with open(asdf_affine2d, 'wb') as fh:\n", - " af = asdf.AsdfFile(aff_tree)\n", - " af.write_to(fh)\n", + " af = asdf.AsdfFile(aff_tree)\n", + " af.write_to(fh)\n", " \n", - "af.close()\n" + "af.close()" ] }, { @@ -1201,7 +1194,7 @@ "\n", "# Overrides for various reference files\n", "# Files should be in the base local directory or provide full path\n", - "#ami3dict['ami_analyze']['override_nrm'] = 'mynrmfile.fits' # NRM mask geometry file\n" + "#ami3dict['ami_analyze']['override_nrm'] = 'mynrmfile.fits' # NRM mask geometry file" ] }, { @@ -1343,7 +1336,7 @@ " with datamodels.open(ami_oi) as oi_f:\n", " # Check which steps were run\n", " for step, status in oi_f.meta.cal_step.instance.items():\n", - " print(f\"{step}: {status}\")\n" + " print(f\"{step}: {status}\")" ] }, { @@ -1418,12 +1411,11 @@ "outputs": [], "source": [ "# Fetch the AMI filter throughput model\n", - "refs = crds.getreferences({\n", - " 'INSTRUME' : calints_f.meta.instrument.name,\n", - " 'FILTER' : calints_f.meta.instrument.filter,\n", - " 'DATE-OBS' : calints_f.meta.observation.date,\n", - " 'TIME-OBS' : calints_f.meta.observation.time}, \n", - " reftypes=['throughput'])\n", + "refs = crds.getreferences({'INSTRUME': calints_f.meta.instrument.name,\n", + " 'FILTER': calints_f.meta.instrument.filter,\n", + " 'DATE-OBS': calints_f.meta.observation.date,\n", + " 'TIME-OBS': calints_f.meta.observation.time}, \n", + " reftypes=['throughput'])\n", "\n", "throughput_file = refs['throughput']\n", "throughput_model = datamodels.open(throughput_file)\n", @@ -1453,18 +1445,18 @@ "# The returned array has shape (nlambda, 2)\n", "nlamb = 19\n", "bandpass = utils.combine_src_filt(filt_spec,\n", - " spec_abdor,\n", - " trim=0.01,\n", - " nlambda=nlamb)\n", + " spec_abdor,\n", + " trim=0.01,\n", + " nlambda=nlamb)\n", "\n", "# Create an ASDF file\n", - "asdf_bandpass_abdor = os.path.join(ami3_dir_alt,'bandpass_f480m_abdor.asdf')\n", + "asdf_bandpass_abdor = os.path.join(ami3_dir_alt, 'bandpass_f480m_abdor.asdf')\n", "\n", "bp_tree = {\"bandpass\": bandpass}\n", "\n", "with open(asdf_bandpass_abdor, 'wb') as fh:\n", - " af = asdf.AsdfFile(bp_tree)\n", - " af.write_to(fh)\n", + " af = asdf.AsdfFile(bp_tree)\n", + " af.write_to(fh)\n", " \n", "af.close()" ] @@ -1489,18 +1481,18 @@ "# The retruned array has shape (nlambda, 2)\n", "nlamb = 19\n", "bandpass = utils.combine_src_filt(filt_spec,\n", - " spec_refstar,\n", - " trim=0.01,\n", - " nlambda=nlamb)\n", + " spec_refstar,\n", + " trim=0.01,\n", + " nlambda=nlamb)\n", "\n", "# Create an ASDF file\n", - "asdf_bandpass_refstar = 
os.path.join(ami3_dir_alt,'bandpass_f480m_hd37093.asdf')\n", + "asdf_bandpass_refstar = os.path.join(ami3_dir_alt, 'bandpass_f480m_hd37093.asdf')\n", "\n", "bp_tree = {\"bandpass\": bandpass}\n", "\n", "with open(asdf_bandpass_refstar, 'wb') as fh:\n", - " af = asdf.AsdfFile(bp_tree)\n", - " af.write_to(fh)\n", + " af = asdf.AsdfFile(bp_tree)\n", + " af.write_to(fh)\n", " \n", "af.close()\n", "throughput_model.close()" @@ -1531,7 +1523,7 @@ "source": [ "if doami3_independent:\n", " for (calints, asdf_bandpass) in zip(calints_files, [asdf_bandpass_abdor, asdf_bandpass_refstar]):\n", - " # Call the AmiAnalyze step on each file, bandpass\n", + " # Call the AmiAnalyze step on each data file, bandpass\n", " oimodel, oimodelmulti, lgmodel = AmiAnalyzeStep.call(calints,\n", " save_results=True,\n", " # firstfew=5, # Uncomment to speed up processing\n", @@ -1562,7 +1554,7 @@ " oifits = sorted(glob.glob(os.path.join(ami3_dir_alt, '*ami-oi.fits')))\n", " # Call the AmiNormalize step on the target, ref star\n", " # Arguments can be AmiOIModels or filenames\n", - " AmiNormalizeStep.call(oifits[0],oifits[1], output_dir=ami3_dir_alt, save_results=True)" + " AmiNormalizeStep.call(oifits[0], oifits[1], output_dir=ami3_dir_alt, save_results=True)" ] }, { @@ -1684,7 +1676,7 @@ "source": [ "if doviz:\n", " # Get the target and calibrator ami-oi.fits files\n", - " oifitslist = sorted(glob.glob(os.path.join(ami3_dir,\"*ami-oi.fits\")))\n", + " oifitslist = sorted(glob.glob(os.path.join(ami3_dir, \"*ami-oi.fits\")))\n", "\n", " # Open them as datamodels\n", " amioi_targ = datamodels.open(oifitslist[0])\n", @@ -1747,7 +1739,7 @@ " norm = ImageNormalize(amilgmodel.norm_centered_image[0], \n", " interval=MinMaxInterval(), \n", " stretch=SqrtStretch())\n", - " fig, axs = plt.subplots(1,3, figsize=(12,5))\n", + " fig, axs = plt.subplots(1, 3, figsize=(12, 5))\n", " axs[0].set_title('Normalized Data')\n", " im1 = axs[0].imshow(amilgmodel.norm_centered_image[0], norm=norm)\n", " axs[1].set_title('Normalized Model')\n", @@ -1755,8 +1747,8 @@ " axs[2].set_title('Normalized Residual (Data-Model)')\n", " im3 = axs[2].imshow(amilgmodel.norm_resid_image[0])\n", " \n", - " for im in [im1,im2,im3]:\n", - " plt.colorbar(im, shrink=.95,location='bottom', pad=.05)\n", + " for im in [im1, im2, im3]:\n", + " plt.colorbar(im, shrink=.95, location='bottom', pad=.05)\n", " for ax in axs:\n", " ax.axis('off')\n", " plt.tight_layout()" From 288f8f193c587e3ef7600e7964d34434260980a9 Mon Sep 17 00:00:00 2001 From: Rachel Cooper Date: Tue, 11 Mar 2025 15:32:24 -0400 Subject: [PATCH 4/6] More figure descriptions, labels --- .../NIRISS/AMI/JWPipeNB-niriss-ami.ipynb | 56 +++++++++++++++---- 1 file changed, 46 insertions(+), 10 deletions(-) diff --git a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb index b38e330..c1ade72 100755 --- a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb +++ b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb @@ -621,13 +621,21 @@ "im2 = axs[1].imshow(uncal_ref.data[0][-1], norm=norm2)\n", "\n", "for im in [im1, im2]:\n", - " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05)\n", + " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05, label='DN')\n", " cbar.ax.tick_params(axis='x', rotation=45)\n", "for ax in axs:\n", " ax.axis('off')\n", "plt.tight_layout()" ] }, + { + "cell_type": "markdown", + "id": "9b3af10d-6233-4893-834e-0c8614686938", + "metadata": {}, + "source": [ + "The above image shows one group of one integration of the target and 
reference star's `uncal` data, displayed side by side on a square-root stretch. The source is faintly visible but the images appear quite noisy. The data is in units of DN (data number, or counts). The pixels with high values appear as bright dots; we expect them to be flagged as bad pixels during the first pipeline stage." + ] + }, { "cell_type": "code", "execution_count": null, @@ -863,7 +871,7 @@ " im2 = axs[1].imshow(rateints_ref.data[0], norm=norm2)\n", " \n", " for im in [im1, im2]:\n", - " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05)\n", + " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05, label='DN/s')\n", " cbar.ax.tick_params(axis='x', rotation=45)\n", " for ax in axs:\n", " ax.axis('off')\n", @@ -875,7 +883,7 @@ "id": "ae80024a-c9e6-4c8d-85f9-8718f3800e38", "metadata": {}, "source": [ - "The white pixels in the images above are those flagged as DO_NOT_USE by the pipeline, so their pixel values in the image are replaced with `NaN` values." + "The above image shows one slice of the target and reference star `rateints` data on a square-root stretch. The structure of the snowflake-shaped diffraction pattern is clearly visible now. Since ramp fitting has been performed, the units of these images are a rate: DN/s. The white pixels are those flagged as DO_NOT_USE by the pipeline, so their pixel values in the image are replaced with `NaN` values." ] }, { @@ -1072,13 +1080,21 @@ " im2 = axs[1].imshow(calints_ref.data[0], norm=norm2)\n", " \n", " for im in [im1, im2]:\n", - " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05)\n", + " cbar = plt.colorbar(im, shrink=.9, location='bottom', pad=.05, label='DN/s')\n", " cbar.ax.tick_params(axis='x', rotation=45)\n", " for ax in axs:\n", " ax.axis('off')\n", " plt.tight_layout()" ] }, + { + "cell_type": "markdown", + "id": "250d88c2-050b-4e97-9562-a8fafa4d5a3e", + "metadata": {}, + "source": [ + "The above image shows one slice of the target and reference star `calints` data on a square-root stretch. For AMI data, the only visible difference in the image outputs of `detector1` and `image2` pipelines is that they have been flat-fielded. The units of these images are still a rate: DN/s." + ] + }, { "cell_type": "markdown", "id": "3ef9155e", @@ -1127,7 +1143,7 @@ "\n", "Some optional input arguments to the `AmiAnalyze` step accept the name of an ASDF file containing a particular data tree. The default `bandpass` and `affine2d` arguments are currently generic. \n", "\n", - "An example of creating and using such a file is shown here; an example of creating and using the bandpass ASDF files is shown below in " + "An example of creating and using such a file is shown here; an example of creating and using the bandpass ASDF files is shown below. " ] }, { @@ -1504,7 +1520,7 @@ "metadata": {}, "source": [ "### Call the AmiAnalyzeStep\n", - "Run on the target and calibrator `*calints.fits` files, using the corresponding bandpass file for each.\n", + "Run on the target and reference star `*calints.fits` files, using the corresponding bandpass file for each.\n", "\n", "When called individually rather than as part of the `calwebb_ami3 pipeline`, the `AmiAnalyzeStep` saves two additional types of data products. For each input file, the step produces:\n", "* `*ami-oi.fits` file containing of observables averaged over all integrations of the input file. The error is taken to be the standard error of the mean, where the variance is the covariance between amplitudes and phases (e.g. 
fringe amplitudes and fringe phases, closure phases and triple-product amplitudes).\n",
 "* `*amimulti-oi.fits` file containing tables of observables for each integration. Files do not contain error estimates.\n",
 "* `*amilg.fits` file containing cropped and cleaned image plane data, model, and residuals, and parameters of the best-fit LG model.\n"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "f8f7254b-b679-478d-91ed-56cb52981dfd",
 "metadata": {},
 "source": [
- "First we'll look at the target and calibrator's squared visibilities and closure phases. We plot the squared visibilities against their corresponding baselines (units of meters projected back to the primary meter). We plot the closure phases against the longest of the three baselines that forms the \"closure triangle\" whose phases were summed to calculate the closure phase."
+ "First we'll look at the target and reference star's squared visibilities and closure phases. We plot the squared visibilities against their corresponding baselines (units of meters projected back to the primary mirror). We plot the closure phases against the longest of the three baselines that forms the \"closure triangle\" whose phases were summed to calculate the closure phase."
 ]
 },
 "if doviz:\n",
- " # Get the target and calibrator ami-oi.fits files\n",
+ " # Get the target and reference star's ami-oi.fits files\n",
 " oifitslist = sorted(glob.glob(os.path.join(ami3_dir, \"*ami-oi.fits\")))\n",
 "\n",
 " # Open them as datamodels\n",
 " amioi_targ = datamodels.open(oifitslist[0])\n",
 " amioi_ref = datamodels.open(oifitslist[1])\n",
 "\n",
 " # Plot the observables\n",
 " plot_observables(amioi_targ)\n",
 " plot_observables(amioi_ref)"
 ]
 },
+ {
+ "cell_type": "markdown",
+ "id": "0de403d0-20c3-4623-93ed-5137462d901b",
+ "metadata": {},
+ "source": [
+ "The top two scatter plots show the observables from the target observation, and the lower two show the observables from the reference star. For a perfect point source observation at a single wavelength, we would expect to recover closure phases of zero and squared visibilities of unity. \n",
+ "\n",
+ "Since the closure phases are sensitive to asymmetries in the source brightness distribution, we can tell from the larger scatter of the target that it is likely not a point source; i.e., there is a faint companion. On the other hand, the reference star has closure phases with a much smaller scatter around zero. The squared visibilities of both decrease at longer baselines due to an effect called bandpass smearing. We expect that calibrating the target by the reference star should correct for this, as well as other systematics."
+ ]
+ },
 {
 "cell_type": "markdown",
 "id": "424dcd6b-ae89-44ba-9c71-cd6fe1c35f82",
 "metadata": {},
 "source": [
- "Now we will plot the final calibrated data product; the target normalized by the reference star. The reference star closure phases are subtracted from the target closure phases, and the target squared visibilities are divided by the reference star squared visibilities. These interferometric observables can then be analyzed to recover information about the observed scene. (reference specific packages?)"
+ "Now we will plot the final calibrated data product: the target normalized by the reference star. The reference star closure phases are subtracted from the target closure phases, and the target squared visibilities are divided by the reference star squared visibilities. \n",
+ "\n",
+ "Further scientific analysis on these calibrated OIFITS files can be done with community-developed analysis software like [CANDID](https://github.com/amerand/CANDID) ([Gallenne et al. 2015](https://ui.adsabs.harvard.edu/abs/2015A%26A...579A..68G/abstract)) or [Fouriever](https://github.com/kammerje/fouriever) to extract binary parameters, or an image reconstruction code like [SQUEEZE](https://github.com/fabienbaron/squeeze) ([Baron et al. 
2010](https://ui.adsabs.harvard.edu/abs/2010SPIE.7734E..2IB/abstract)) or BSMEM ([Skilling & Bryan 1984](https://ui.adsabs.harvard.edu/abs/1984MNRAS.211..111S/abstract), [Buscher 1994](https://ui.adsabs.harvard.edu/abs/1994IAUS..158...91B/abstract), [Baron & Young 2008](https://ui.adsabs.harvard.edu/abs/2008SPIE.7013E..3XB/abstract))." ] }, { @@ -1718,7 +1746,7 @@ "source": [ "### Display the best-fit model\n", "\n", - "We can also look at the cleaned data, model, and residual images that are saved in the auxiliary `*amilg.fits` data products. Each image has been normalized by the peak pixel value of the data, and the data and model are displayed on a square root stretch to emphasize the fainter features." + "We can also look at the cleaned data, model, and residual images that are saved in the auxiliary `*amilg.fits` data products:" ] }, { @@ -1754,6 +1782,14 @@ " plt.tight_layout()" ] }, + { + "cell_type": "markdown", + "id": "7ad908f8-90c9-4df4-8f52-2c1087a092db", + "metadata": {}, + "source": [ + "Each image has been normalized by the peak pixel value of the data, and the data and model are displayed on a square root stretch to emphasize the fainter features. By looking at the residual (data - model) image, we can see that the model is a good fit to the data. This model achieves better contrast than ground-based NRM, but has not reached the binary contrast science requirements of AMI. The faint vertical striping in the background of the residual image is 1/f noise (flicker noise), which is an active area of improvement for the NIRISS/AMI team, as is the best method of correcting for charge migration. " + ] + }, { "cell_type": "markdown", "id": "89ee8944", From 94816623fa660eefa82aa5ab8f2d9e73a04bb0c2 Mon Sep 17 00:00:00 2001 From: Rachel Cooper Date: Thu, 13 Mar 2025 14:53:07 -0400 Subject: [PATCH 5/6] More pep8 fixes --- .../NIRISS/AMI/JWPipeNB-niriss-ami.ipynb | 44 +++++++++---------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb index c1ade72..10f64f5 100755 --- a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb +++ b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb @@ -16,7 +16,7 @@ "##### NIRISS AMI Pipeline Notebook\n", "\n", "**Authors**: R. Cooper
\n", - "**Last Updated**: January 1, 2025
\n", + "**Last Updated**: March 13, 2025
\n", "**Pipeline Version**: 1.17.0 (Build 11.2)" ] }, @@ -607,11 +607,11 @@ "source": [ "# Image normalization (square root stretch)\n", "norm1 = ImageNormalize(uncal_targ.data[0][-1], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", "norm2 = ImageNormalize(uncal_ref.data[0][-1], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", "\n", "# Display the images\n", "fig, axs = plt.subplots(1, 2, figsize=(9, 5))\n", @@ -725,8 +725,8 @@ "\n", "# Step names are copied here for reference\n", "det1_steps = ['group_scale', 'dq_init', 'saturation', 'ipc', 'superbias', 'refpix',\n", - " 'linearity', 'persistence', 'dark_current', 'charge_migration',\n", - " 'jump', 'ramp_fit', 'gain_scale']\n", + " 'linearity', 'persistence', 'dark_current', 'charge_migration',\n", + " 'jump', 'ramp_fit', 'gain_scale']\n", "\n", "# Overrides for whether or not certain steps should be skipped\n", "# skipping the ipc, persistence, and dark steps\n", @@ -859,11 +859,11 @@ "source": [ "if dodet1:\n", " norm1 = ImageNormalize(rateints_targ.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " norm2 = ImageNormalize(rateints_ref.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " fig, axs = plt.subplots(1, 2, figsize=(9, 5))\n", " axs[0].set_title('Target Rateints Data')\n", " im1 = axs[0].imshow(rateints_targ.data[0], norm=norm1)\n", @@ -1067,11 +1067,11 @@ "if doimage2:\n", " # Image normalization (square root stretch)\n", " norm1 = ImageNormalize(calints_targ.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " norm2 = ImageNormalize(calints_ref.data[0], \n", - " interval=MinMaxInterval(), \n", - " stretch=SqrtStretch())\n", + " interval=MinMaxInterval(), \n", + " stretch=SqrtStretch())\n", " # Display the images\n", " fig, axs = plt.subplots(1, 2, figsize=(9, 5))\n", " axs[0].set_title('Target Calints Data')\n", @@ -1431,7 +1431,7 @@ " 'FILTER': calints_f.meta.instrument.filter,\n", " 'DATE-OBS': calints_f.meta.observation.date,\n", " 'TIME-OBS': calints_f.meta.observation.time}, \n", - " reftypes=['throughput'])\n", + " reftypes=['throughput'])\n", "\n", "throughput_file = refs['throughput']\n", "throughput_model = datamodels.open(throughput_file)\n", @@ -1461,9 +1461,9 @@ "# The returned array has shape (nlambda, 2)\n", "nlamb = 19\n", "bandpass = utils.combine_src_filt(filt_spec,\n", - " spec_abdor,\n", - " trim=0.01,\n", - " nlambda=nlamb)\n", + " spec_abdor,\n", + " trim=0.01,\n", + " nlambda=nlamb)\n", "\n", "# Create an ASDF file\n", "asdf_bandpass_abdor = os.path.join(ami3_dir_alt, 'bandpass_f480m_abdor.asdf')\n", @@ -1497,9 +1497,9 @@ "# The retruned array has shape (nlambda, 2)\n", "nlamb = 19\n", "bandpass = utils.combine_src_filt(filt_spec,\n", - " spec_refstar,\n", - " trim=0.01,\n", - " nlambda=nlamb)\n", + " spec_refstar,\n", + " trim=0.01,\n", + " nlambda=nlamb)\n", "\n", "# Create an ASDF file\n", "asdf_bandpass_refstar = os.path.join(ami3_dir_alt, 'bandpass_f480m_hd37093.asdf')\n", From d531f1c890793bc7e35f8270c64e8b2219a842b1 Mon Sep 17 00:00:00 2001 From: Rachel Cooper Date: Thu, 13 Mar 2025 15:03:56 -0400 Subject: [PATCH 6/6] removed 
duplicate cell --- notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb index 10f64f5..8c747f1 100755 --- a/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb +++ b/notebooks/NIRISS/AMI/JWPipeNB-niriss-ami.ipynb @@ -1146,17 +1146,6 @@ "An example of creating and using such a file is shown here; an example of creating and using the bandpass ASDF files is shown below. " ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "605f4b5a-6c1d-4804-9a57-993b46089094", - "metadata": {}, - "outputs": [], - "source": [ - "calints_files = sorted(glob.glob(os.path.join(image2_dir, '*_calints.fits')))\n", - "calints_f = datamodels.open(calints_files[0])" - ] - }, { "cell_type": "code", "execution_count": null,