4th Copernicus Emergency Management Service and Global Flood Forecasting and Monitoring Meeting


Registration is now open for the 4th Copernicus Emergency Management Service (CEMS) Global Flood Forecasting and Monitoring Meeting. This online event will take place on April 2nd and 3rd, 2025, from 13:00 to 16:30 UTC each day.


68th Session of the Committee on the Peaceful Uses of Outer Space (COPUOS)


The Committee on the Peaceful Uses of Outer Space (COPUOS) was set up by the General Assembly in 1959 to govern the exploration and use of space for the benefit of all humanity: for peace, security and development. The Committee was tasked with reviewing international cooperation in peaceful uses of outer space, studying space-related activities that could be undertaken by the United Nations, encouraging space research programmes, and studying legal problems arising from the exploration of outer space.

25 June – 4 July 2025
Vienna International Centre
Vienna

United Nations Office for Outer Space Affairs (UNOOSA)

ERATOSTHENES Centre of Excellence


The ERATOSTHENES Centre of Excellence (https://eratosthenes.org.cy/) was established in 2020 through the EXCELSIOR H2020 Widespread Teaming project (https://excelsior2020.eu/) by upgrading the Remote Sensing and Geo-Environment Lab, which had operated at the Department of Civil Engineering and Geomatics of the Cyprus University of Technology since 2007. The ERATOSTHENES Centre of Excellence (CoE) aspires to become a world-class Digital Innovation Hub and a reference centre for Earth Observation, space technology and geospatial information in the Eastern Mediterranean, Middle East, and North Africa (EMMENA) region.

As a Digital Innovation Hub (DIH), the ERATOSTHENES CoE adopts a two-axis model. The vertical axis consists of three Thematic Departments for sustained research excellence, i.e., Environment & Climate, Resilient Society and Big Earth Data Analytics, whereas the horizontal axis consists of four functional areas, i.e., Infrastructure, Research, Education and Entrepreneurship. The DIH will create an ecosystem that combines state-of-the-art remote sensing, data management and processing technologies, cutting-edge research opportunities, targeted education services and the promotion of entrepreneurship. To be dynamic and innovative, the Digital Innovation Hub will be based on two flagship infrastructures (https://eratosthenes.org.cy/functional-areas/infrastructure-functional-area/): a satellite ground receiving station and a ground-based atmospheric remote sensing station.

The ERATOSTHENES CoE hosts a multi-disciplinary team of highly skilled researchers and engineers with numerous publications in peer-reviewed journals, nominations and awards, as well as research projects funded from various European and national funding sources in different thematic areas, such as Environment & Climate (Atmosphere, Agriculture, Water, Land), Resilient Society (Disaster Risk Reduction, Cultural Heritage, Marine Safety & Security, Energy) and Big Earth Data Analytics (Information Extraction, Visual Exploration & Visualization, Crowdsourcing & Data Fusion, Geoinformatics). The activities of the Disaster Risk Reduction Cluster of the ERATOSTHENES CoE, which are directly aligned with the goals of UN-SPIDER, involve the systematic monitoring of hazards and the development of Early Warning and Decision Support Systems dealing with earthquakes, landslides, coastal and soil erosion, forest fires, floods, drought and epidemics.

For more information on the host institution, visit: https://eratosthenes.org.cy/

The ERATOSTHENES Centre of Excellence (CoE) engages with the complete ecosystem of stakeholders in a multi-actor approach, linking with players segmented according to their geographic location (from central Europe, to South-Eastern Europe, to the EMMENA region), their position in the EO value chain (from EO data providers, to science laboratories and research institutes, to SMEs and large industries) and their mandate (from the public sector, to sectoral coordination organizations, to economic development banks, etc.).

In this framework, the ERATOSTHENES CoE provides capacity building for the professional development of public and private stakeholders in Cyprus and beyond, with a focus on equipping and facilitating scientific and research personnel in the field of Earth Observation and Geoinformatics. The programme acts as a regional multiplier in the EMMENA, educating the new generation of scientists and motivating them to create new businesses capitalizing on innovative research. In addition to the ERATOSTHENES CoE staff, governmental departments, private companies and end users can benefit from the EO professional training schemes. Moreover, the ERATOSTHENES CoE is a certified Vocational and Education Training (VET) Centre under the Cyprus Human Resource Development Authority (HRDA), able to carry out training activities subsidized, co-financed and/or advertised by the HRDA.

Last but not least, the ERATOSTHENES CoE offers, in collaboration with the Cyprus University of Technology, an MSc degree in Geoinformatics and Earth Observation, and PhD students at the Cyprus University of Technology have access to the ERATOSTHENES CoE facilities and infrastructure to conduct their research.

Address: Franklin Roosevelt 82, Limassol 3012, Cyprus

Telephone: +357 25002908

Email: contact [at] eratosthenes.org.cy

Dr. Marios Tzouvaras - Research Coordinator | Disaster Risk Reduction Cluster Leader

Telephone: +357-25002006 

Email Address: marios.tzouvaras [at] eratosthenes.org.cy

Step-by-Step: Flood Mapping with Sentinel-1 Interferometric Coherence

The Step-by-Step Explanation is also available as a PDF, which can be downloaded.

1. Downloading the Scenes from the Alaska Satellite Facility 

The Alaska Satellite Facility (ASF) downlinks, processes, archives, and distributes remote-sensing data to scientific users worldwide. It is a convenient platform to download Sentinel-1 SLC data: 

ASF Data Search 

Each scene is 4 to 5 GB in size. Based on your AOI, select one orbit in which all scenes are located. Three Sentinel-1 scenes are necessary: two pre-event scenes and one post-event scene. The scenes need to be of file type L1 Single Look Complex (SLC).

Drawing the AOI: 

After loading the page, you can directly start drawing your AOI. Then use the “Filters” option to select the date range. Under “Additional Filters” select “L1 Single Look Complex (SLC)” as file type. If you already know the path, specify it in the section “Path and Frame Filters”. 

How to use the Filters: 

  • Select Start Date and End Date: they should be at least one month apart, and the end date should be shortly after the disaster happened. 

  • File Type: select L1 Single Look Complex 

  • Path and Frame filters can be specified later 

  • Click "Apply" 


 

After clicking on Apply, you will see the scenes (in blue) covering your AOI (yellow/orange). By hovering over the individual rectangles, you can see to what extent they cover your AOI (red). Choose one rectangle and click on it. In the scene list, you will see it highlighted, along with the path and frame of the scene. Use these numbers to go back into "Filters" and add them to path start/end and frame start/end. After clicking SEARCH again, only one rectangle will be left on the map. 

All three scenes need to be in the same rectangle, i.e., they need to have the same path and frame! 
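This consistency requirement can also be checked programmatically. Below is a minimal Python sketch, assuming the path and frame values have been noted down from the ASF scene list; the numbers used here are placeholders, not real scene metadata:

```python
# Hypothetical helper: verify that all selected scenes share one footprint.
# In practice, path and frame are read from the ASF scene list.

def check_same_footprint(scenes):
    """Return True if all scenes share the same (path, frame) pair."""
    footprints = {(s["path"], s["frame"]) for s in scenes}
    return len(footprints) == 1

scenes = [
    {"name": "pre-event 1", "path": 88, "frame": 471},
    {"name": "pre-event 2", "path": 88, "frame": 471},
    {"name": "post-event",  "path": 88, "frame": 471},
]
print(check_same_footprint(scenes))  # True: safe to use for coherence
```

If this prints False, at least one scene comes from a different orbit track or frame and cannot be co-registered with the others.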





Now, with the shopping cart icon you can add the three scenes to the download section. 


 

In the download tab, you can start the download of the scenes by clicking on the cloud icon. This will take some time, as each scene is 4 to 5 GB in size. Please also make sure that enough disk space is available on your computer. 
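As a quick sanity check before downloading, the available disk space can be compared against the expected download size. A small Python sketch, using the worst-case figure of ~5 GB per scene:

```python
import shutil

# Three SLC scenes at up to ~5 GB each (worst case).
N_SCENES = 3
BYTES_PER_SCENE = 5 * 1024**3

free = shutil.disk_usage(".").free
required = N_SCENES * BYTES_PER_SCENE
print(f"Free: {free / 1024**3:.1f} GB, required: ~{required / 1024**3:.0f} GB")
if free < required:
    print("Warning: not enough disk space for all three scenes.")
```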

 

 



 

2. Processing the Scenes with a pre-defined workflow in SNAP 

  • Drag and Drop the three Scenes (zipped) into the Product Explorer Window in SNAP. 

  • Click on the Graph Builder tool in the toolbar. At the bottom of the Graph Builder window, click on "Load". Browse to the directory where you stored the Coherence.xml file. Double-click to open it. 



 

This is how the loaded graph should look. Essentially, this is a processing pipeline for the three input SLC scenes. 

The individual processing steps in the blue boxes can also be found in the normal task bar. 


 

A few things need to be selected manually in the graph window: 

  1. For the three Read tabs, the Sentinel-1 scenes need to be selected. This has to be in the right order: 
    Read = Pre-Event 1  
    Read(2) = Post-Event 
    Read(3) = Pre-Event 2 

  2. In the TOPSAR-Split tab, select the sub-swath and the bursts. The positioning of the subsets can be seen in the map at the bottom of the window. Apply the same selection to the TOPSAR-Split tabs (2) and (3). 

TOPSAR Split (In Detail) 


 

Sentinel-1 scenes that can be used for interferometry are captured in three sub-swaths (IW1, IW2, IW3) using Terrain Observation with Progressive Scans SAR (TOPSAR). Each sub-swath image consists of a series of bursts, where each burst is processed as a separate image. 

That is why we first split the scene and later deburst it, i.e., put it back together into one image with the TOPSAR Deburst tool. 


In the Terrain-Correction tab, you can control the output layers, which should be named: 

coh_IWx_VV_pre-Event2_pre-Event1 
coh_IWx_VV_pre-Event2_post-Event 
If you also selected the VH polarization, two more layers will appear with VH instead of VV in their names. 
If the dates are switched, go back to the Read tabs and check the selection order described above. 


 


  • In the Write tab, select the destination directory for the output. 

  • Click on Run. 

The execution and the calculation of the coherence will take some time. Depending on your computation capacity, it can take up to a few hours. 
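If you prefer a scripted run, SNAP graphs can also be executed headless with the gpt command-line tool that ships with SNAP. The sketch below only constructs the command: the -P parameter names are hypothetical and must match the placeholders defined in your Coherence.xml, and the file names are examples only.

```python
# Build (but do not run) a headless gpt invocation for the coherence graph.
# NOTE: the -P parameter names are assumptions; adapt them to your graph XML.
graph = "Coherence.xml"
cmd = [
    "gpt", graph,
    "-PpreEvent1=S1A_preEvent1.zip",   # Read
    "-PpostEvent=S1A_postEvent.zip",   # Read(2)
    "-PpreEvent2=S1A_preEvent2.zip",   # Read(3)
    "-Poutput=coherence_stack",
]
print(" ".join(cmd))
# To actually execute it (SNAP installed, gpt on the PATH):
# import subprocess; subprocess.run(cmd, check=True)
```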

If the graph does not work, you can also execute the steps individually in SNAP. You will need more storage capacity on your machine for that, as interim products need to be stored after each step. 

  1. Apply Orbit File (10 min) 

  2. S-1 TOPS Split (select only VV polarization, up to 3 bursts in one IW swath) 

  3. Coregistration: S-1 Back Geocoding (DEM: COP-30; uncheck “Mask out areas with no elevation”) 

  4. Coherence Estimation (square pixel size) 

  5. Deburst 

  6. Speckle Filtering (median filter) 

  7. Terrain Correction (DEM: COP-30, Map projection: Auto UTM, include layover shadow mask as output band) 

3. Exporting the Coherence GeoTIFF 

After the processing has finished you can close the graph builder window. In the product explorer window on the left, you will find a new product [4], called  

S1A_IW_SLC__1SDV_xxxx_xxxx_xxxx_xxxx_xxxx_Orb_Stack_Coh_Deb_ML_TC 

Click on the product, then go to File in the task bar and select Export. Select GeoTIFF. Choose a directory for your export and click Export Product.

4. Importing the Coherence raster into Google Earth Engine 

  • Open the following Google Earth Engine Script: 

  • In the left panel, go to the Assets Tab, click on the red NEW button, under Image Upload select “GeoTIFF (.tif,.tiff) or TFRecord (.tfrecord + .json).”  

  • Select the exported tif from the previous step, by clicking on SELECT and give a name to the file under Asset Name. Click on UPLOAD.  

  • Under tasks, you will see the upload progressing. Afterwards click on “Run.”



 

  • Under the Assets tab on the left side, you will see your raster under CLOUD ASSETS. If it does not show up, click on the Refresh Icon.  

  • By hovering over the file, you will see an arrow; clicking it imports the raster into the script. 

In Detail: Flood Mapping with Sentinel-1 Interferometric Coherence

Flood detection in urban areas is a major limitation of approaches based on SAR backscatter. This is problematic for the disaster response community, as urban areas are where most damage happens and where most people are exposed to the disaster, considering that the majority of the population nowadays lives in cities rather than rural areas, a trend that is increasing. 

Input Data:

  • Sentinel-1 SLC data
  • Download recommended via the Alaska Satellite Facility

Software:

  • SNAP
  • Google Earth Engine 

The proposed practice works well in dense urban areas such as cities, but also in areas with little to no vegetation cover. In such areas, mudflows, landslides, and flow paths of flash floods can also be mapped, although this is not the objective of this practice. 
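The underlying detection logic can be illustrated with a toy NumPy example: stable urban pixels keep high coherence between the two pre-event scenes, while coherence between the pre- and post-event scenes collapses where flooding or damage occurred. The thresholds below (0.6 and 0.3) are purely illustrative, not calibrated values:

```python
import numpy as np

# Toy coherence rasters (values in [0, 1]); real inputs come from SNAP.
coh_pre = np.array([[0.8, 0.9, 0.2],
                    [0.7, 0.8, 0.1]])   # pre-event pair: urban pixels stay high
coh_co  = np.array([[0.2, 0.9, 0.1],
                    [0.1, 0.8, 0.1]])   # pre/post pair: drops where flooded

T_HIGH, T_LOW = 0.6, 0.3  # illustrative thresholds
change_mask = (coh_pre >= T_HIGH) & (coh_co <= T_LOW)
print(change_mask)  # True marks candidate flooded/damaged urban pixels
```

Pixels that were already incoherent before the event (e.g., vegetation) are excluded by the first condition, which is exactly why the method needs two pre-event scenes.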

Strengths:

  • Independent of clouds
  • Detection of changes at sub-pixel level, which is a major advantage in dense urban settlements
  • Detection of floods, infrastructure damage and other surface changes

Limitations:

  • The practice is limited in detecting floods in vegetated areas like forests and agricultural areas
  • A short time span between the SAR image acquisitions is essential
  • No cloud computing options are available; a region of interest must be selected, as large-scale computations are not feasible on local machines
  • SLC scenes require a lot of storage capacity, i.e., up to 5 GB per scene. 

Pelich, Ramona; Chini, Marco; Hostache, Renaud; Matgen, Patrick; Pulvirenti, Luca; Pierdicca, Nazzareno (2022): Mapping Floods in Urban Areas from Dual-Polarization InSAR Coherence Data. In IEEE Geosci. Remote Sensing Lett. 19, pp. 1–5. DOI: 10.1109/LGRS.2021.3110132.

 


  1. Downloading the Scenes from the Alaska Satellite Facility 

  2. Processing the Scenes with a pre-defined workflow in SNAP 

  3. Exporting the Coherence GeoTIFF 

  4. Importing the Coherence raster into Google Earth Engine 

  5. Change Detection in Urban Areas 

  6. Map of flooded/damaged areas 

Step-by-Step: Flood Mapping with Sentinel-1 and Sentinel-2 Imagery and Digital Terrain Models

Before the Flood 

There are multiple steps in this practice that you can already perform before a flood hits your Area of Interest (AOI). This will help you save crucial time during the disaster and ensure a rapid response. 

1. Create your AOI on GFM 

First, make a user account on GFM: Global Flood Monitoring

To access the data, you need to create an AOI, which will be stored in your GFM account. You need to assign a name and a description to your AOI. Then click on “Next step”. 

In the next step, you can choose between entering the coordinates of your AOI, drawing the AOI, or selecting a region. Most of the time, drawing the AOI is the best option. In that case, select the square to the right of the panel, draw the AOI and select Save AOI. 
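For reference, an AOI of this kind is essentially a closed polygon in longitude/latitude (WGS84) coordinates. A hypothetical example in Python; the coordinates are placeholders, not a real flood area:

```python
import json

# Hypothetical rectangular AOI; the first and last vertex are identical,
# which closes the polygon ring.
aoi = {
    "type": "Polygon",
    "coordinates": [[
        [12.30, 45.40], [12.60, 45.40],
        [12.60, 45.60], [12.30, 45.60],
        [12.30, 45.40],
    ]],
}
print(json.dumps(aoi))
```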



 

2. Check GloFAS regularly 

To know when floods are likely to hit your region, check GloFAS regularly and see whether unusually high precipitation is forecast. 

3. Download the FABDEM of your AOI 

The FABDEM is a 30m open-access Digital Terrain Model. You can find all necessary information in this Data Application of the Month or in this Scientific Publication: https://iopscience.iop.org/article/10.1088/1748-9326/ac4d4f/meta. 

Among other options, FABDEM can be downloaded with Google Earth Engine. With the link below, you can download the FABDEM for your AOI. By clicking on the rectangle tool, you can draw a polygon of your AOI. Then you can click on Run, to run the script. Under tasks, you will see FABDEM popping up. Click on RUN. When the task is finished, you can download the DTM from your Google Drive. 

Using other DTMs, for example a high-resolution national DTM or a commercial DTM, is also possible. The spatial resolution and accuracy of the DTM are key to the success of this practice. 



https://code.earthengine.google.com/78b037560aa4567b5d950d228531844f  
 

4. Setup Python 

This is a step-by-step procedure to set up the Python environment in which the FLEXTH algorithm will work. If you already have a Python installation on your computer, it still makes sense to follow this guide to ensure all libraries are installed properly. 

We need to install the latest version of Miniforge, which is a lightweight Python distribution. Follow this link, which leads you to a repository storing the Miniforge3 installers for the different operating systems (OS). If you are using a 64-bit Windows machine, click on "latest" in the respective row. On the next page, look for "Miniforge3-Windows-x86_64.exe". Click on the link to download the executable. 



 

Double-click the executable to start the setup. In the setup menu, click on "Next", then on "I Agree". Install for "Just Me (recommended)". 

For the install location, you can go with the default folder. Click "Next". For the advanced installation options, you can also keep the default selection. Click "Install". 

After the installation, click "Next" and then "Finish". Open the Miniforge Prompt. This is a command-line interface. First, we create a new virtual environment for the practice called "flexth_env". Then we install the necessary libraries into the new environment: 


Press Enter after every line of code. You will be asked to confirm some changes by typing "Y" and pressing Enter. Take care not to add or miss any spaces in the text. 

After activating the environment, you should see the name of the environment in brackets at the beginning of the line. A Python interface called Jupyter Lab will open after the last line of code. 

The installation of the libraries will take some time. You will see a progress bar in the terminal window, and it will print „done“ once the installation is terminated. 


 

5. Make yourself familiar with the FLEXTH tool 

If everything worked out, you are now able to open Jupyter Lab by entering the Jupyter Lab command into the console and executing it (the last line in the previous code). 

The script is divided into 6 sections: 

  • Loading necessary libraries. 

  • Specifying user-specific input and output directories. 

  • Selecting the AOI and DTM sources. 

  • Setting further Parameters 

  • Mosaicing and Reprojecting GFM outputs. 

  • FLEXTH 

In Jupyter Lab, on the left side, you can see your directories. Browse to the directory of the FLEXTH script “GFM2FLEXTHnb.ipynb”. Go to the first cell and click on the “Run current Cell” symbol on the task bar. 

If the execution of the first cell works without any error messages, you will read “Libraries are loaded” in the console. 

Now you are prepared to use the tool in case of a flood. 

Troubleshooting 

  • If you don't have enough RAM on your machine, you will get the error message: "…unable to allocate … for an array with size…". Consider tiling your input data, and if you already did, consider reducing the tile size. 

  • The code does not overwrite the rasters automatically. You have to delete the mosaics and reprojected rasters before reprocessing the inputs (cell …), or move them to another directory. 
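The tiling idea from the first point can be sketched in a few lines of NumPy: process the raster window by window instead of allocating one huge intermediate array. The array size, the tile size, and the doubling operation below are placeholders for the real processing:

```python
import numpy as np

# Placeholder raster; in practice this would be read window by window from disk.
raster = np.random.rand(1000, 1000).astype("float32")
TILE = 256  # tile edge length in pixels; reduce this if memory errors persist
result = np.empty_like(raster)

for r in range(0, raster.shape[0], TILE):
    for c in range(0, raster.shape[1], TILE):
        window = raster[r:r + TILE, c:c + TILE]
        # Stand-in for the real per-tile processing:
        result[r:r + TILE, c:c + TILE] = window * 2.0

assert result.shape == raster.shape
```

Peak memory then scales with the tile size rather than with the full raster, which is exactly what the error message above is asking for.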

13. Visualize and Interpret the Output 

By executing the last cell, 7. Quick Output Summary, the area flooded before FLEXTH and the area flooded after FLEXTH will be printed. 


In Detail: Flood Mapping with Sentinel-1 and Sentinel-2 Imagery and Digital Terrain Models

Flood mapping in urban areas poses significant challenges for Synthetic Aperture Radar (SAR) sensors due to limitations in detecting water in regions characterized by dense vegetation, urban infrastructure, or complex surface conditions. These limitations include reduced sensitivity in vegetated or built-up areas and water-like backscatter effects on smooth, dry, or snow-covered surfaces. The Global Flood Monitoring (GFM) platform employs Sentinel-1 SAR backscatter data for automated flood delineation but recognizes the constraints posed by such conditions. To address these challenges, the GFM has integrated an exclusion mask that highlights regions prone to SAR-based misclassification.  

This recommended practice introduces a novel algorithm developed by the Joint Research Centre of the European Commission that combines SAR-derived flood layers with digital terrain models, the GFM exclusion mask and incomplete Sentinel-2 flood maps. By leveraging DTMs, water depth calculations and hydrodynamic propagation models are applied to infer flood conditions within exclusion mask areas, enhancing the reliability of flood extent delineations. 
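The core idea can be illustrated with a deliberately simplified one-dimensional NumPy toy. This is not the FLEXTH algorithm itself, whose propagation logic is far more sophisticated; it only shows the principle of taking a water surface elevation from the detected flood pixels and flagging excluded pixels that lie below that surface, with depth as the difference to the terrain:

```python
import numpy as np

dtm      = np.array([10.0, 10.8, 10.2, 12.0, 13.0])      # terrain elevation (m)
flooded  = np.array([True, True, False, False, False])   # SAR-detected flood
excluded = np.array([False, False, True, False, False])  # exclusion mask (e.g. urban)

wse = dtm[flooded].max()               # crude water surface elevation: 10.8 m
depth = np.clip(wse - dtm, 0.0, None)  # water depth where positive
inferred = excluded & (dtm < wse)      # excluded pixels below the water surface
print(wse, inferred)
```

Here the excluded pixel at 10.2 m lies below the 10.8 m water surface and is therefore inferred as flooded, even though the SAR layer could not observe it.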

Input Data:

  • GFM outputs for a flooded Area of Interest:
  • Flooded Area
  • Permanent and Seasonal Water Bodies
  • Exclusion Mask
  • Digital Terrain Model
  • Sentinel-2 flood map and cloud mask (calculated in GEE during this practice)

Software:

  • Global Flood Monitoring Database
  • Python
  • QGIS or other GIS software 

This practice can be used for any area with major flooding. It is especially relevant for large floods that extend into cities and vegetated areas such as forests and agricultural fields.

The contribution of the incomplete Sentinel-2 flood mask is especially important if GFM underestimates the flooded area. In this context, the practice combines two incomplete, imperfect flood maps to obtain the best possible result. 
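The combination step can be sketched as a simple mask union in NumPy: wherever a valid (cloud-free) Sentinel-2 observation exists, a detection from either source counts as flooded. The arrays below are toy placeholders, not real GFM or Sentinel-2 outputs:

```python
import numpy as np

s1_flood = np.array([True, False, False, True])   # GFM / Sentinel-1 detection
s2_flood = np.array([False, True, False, True])   # Sentinel-2 detection
s2_cloud = np.array([False, False, True, False])  # True = no valid S2 observation

combined = s1_flood | (s2_flood & ~s2_cloud)
print(combined.tolist())  # [True, True, False, True]
```

The second pixel is recovered from Sentinel-2 alone, while the cloud-covered third pixel contributes nothing; this is the sense in which two imperfect maps compensate for each other.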

Strengths:

  • The approach is based on physics and the actual topography of the area, which means it overcomes many limitations associated with satellite imagery.
  • Utilizing two data sources strengthens the practice and makes it robust across several applications.  

Limitations:

  • The greatest limitations of the practice come from the spatial resolution and accuracy of the input flood delineation and of the Digital Terrain Model.
  • The quality of the output depends on the number of flooded pixels provided by the input flood delineation. 

Hawker, Laurence; Uhe, Peter; Paulo, Luntadila; Sosa, Jeison; Savage, James; Sampson, Christopher; Neal, Jeffrey (2022): A 30 m global map of elevation with forests and buildings removed. In Environ. Res. Lett. 17 (2), p. 24016. DOI: 10.1088/1748-9326/ac4d4f.

Betterle, Andrea; Salamon, Peter (2024): Water depth estimate and flood extent enhancement for satellite-based inundation maps. In Nat. Hazards Earth Syst. Sci. 24 (8), pp. 2817–2836. DOI: 10.5194/nhess-24-2817-2024.

Expert Flood Monitoring Alliance; McCormick, N.; Salamon, P. (2023): Global Flood Monitoring (GFM) – Product User Manual. European Commission. 


The workflow can be divided into two sections: 1) Preparedness before the flood and 2) Response after the flood.

Its implementation is done in GFM (the Global Flood Monitoring database), Python, and Google Earth Engine. 

[Figure: diagram of the workflow]