southernocean.ai


#Southern Ocean AI | Artificial Intelligence for the Southern Ocean


#picknik.ai | Remote Robot Control


#Alan Turing Institute | Developing digital twin of Antarctica | Identified icebergs using AI algorithms on satellite images


#British Antarctic Survey (BAS) AI Lab | Iceberg detection using AI algorithms on SAR satellite images of polar oceans
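
A minimal sketch of what iceberg detection in SAR imagery can look like as a classical baseline: bright backscatter targets are thresholded and grouped into connected regions. The BAS and Turing pipelines use trained neural networks; the threshold, minimum size and synthetic scene below are invented for illustration.

```python
# Illustrative baseline only: a simple backscatter-threshold detector for bright
# targets (e.g. icebergs) in a SAR intensity image. Not the BAS/Turing method;
# thresholds and sizes here are assumptions.
import numpy as np
from scipy import ndimage

def detect_bright_targets(sar_db: np.ndarray, threshold_db: float = -5.0,
                          min_pixels: int = 20):
    """Return centroids (row, col) of connected bright regions above a dB threshold."""
    mask = sar_db > threshold_db                 # bright pixels stand out against open water
    labels, n = ndimage.label(mask)              # connected-component labelling
    centroids = []
    for region in range(1, n + 1):
        pixels = np.argwhere(labels == region)
        if len(pixels) >= min_pixels:            # discard speckle-sized detections
            centroids.append(tuple(pixels.mean(axis=0)))
    return centroids

# Synthetic example: mostly dark water with one bright square "iceberg".
scene = np.full((200, 200), -15.0)
scene[80:100, 120:140] = 0.0
print(detect_bright_targets(scene))              # -> one centroid near (89.5, 129.5)
```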


#Natural Environment Research Council (NERC) | Investing in environmental science


#SEA.AI | Detecting floating objects early | Using thermal and optical cameras to also catch objects that escape conventional systems such as radar or AIS: unsignalled craft or other floating obstacles, e.g., containers, tree trunks, buoys, inflatables, kayaks, persons overboard | System computes input from lowlight and thermal cameras, using Machine Vision technology, deep learning capabilities and a proprietary database of millions of annotated marine objects | High-resolution lowlight and thermal cameras | Real-time learning of water surface patterns | Searching for anomalies | Distinguishing water from non-water | Comparing anomalies with neural network | Recognizing objects by matching combinations of filters | Augmented reality video stream combined with map view | Intelligent alarming based on threat level | Detecting persons in water | On-board cameras with integrated image processing | Providing digital understanding of vessel surroundings on water | SEA.AI App on smartphone or tablet
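
A small sketch of the "intelligent alarming based on threat level" idea: a detected object's class, range and closing speed are mapped to an alarm level. The classes, weights and thresholds below are invented for illustration and are not SEA.AI's logic.

```python
# Conceptual threat-level alarming: assumed priorities and time-to-contact thresholds.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "buoy", "container"
    range_m: float      # distance from own vessel
    closing_mps: float  # positive when the object is getting closer

PRIORITY = {"person": 3, "vessel": 2, "container": 2, "buoy": 1, "unknown": 1}

def threat_level(d: Detection) -> str:
    time_to_contact = d.range_m / d.closing_mps if d.closing_mps > 0 else float("inf")
    score = PRIORITY.get(d.label, 1)
    if time_to_contact < 60:      # under a minute to potential contact
        score += 2
    elif time_to_contact < 300:
        score += 1
    return {1: "info", 2: "info", 3: "caution", 4: "warning", 5: "alarm"}[min(score, 5)]

print(threat_level(Detection("person", range_m=120, closing_mps=3.0)))  # -> "alarm"
```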


#Australian Securities Exchange (ASX) | Listings | Markets | Technology | Data | Securities


#SubC Imaging | Cameras | Lights | Systems | Software | Lasers


#uWare Robotics | Autonomous underwater vehicles (AUVs)


#Sea Machines | Artificial Intelligence Recognition and Identification System | Detects, tracks, classifies and geolocates objects, vessel traffic and other potential obstacles


#Advanced Navigation | AI-based marine navigation systems | AI-based underwater navigation solutions and robotics technology | Hydrography | Underwater acoustic positioning solutions | Autonomous Underwater Vehicle (AUV) | Inertial navigation systems (INS) | Sydney, Australia
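
A toy 2-D dead-reckoning sketch to illustrate what an inertial navigation system does at its core: integrating body-frame acceleration and yaw rate into a pose. Real INS products fuse this with DVL, GNSS or acoustic aiding; this sketch omits all of that and the sample values are arbitrary.

```python
# Minimal dead reckoning from (forward acceleration, yaw rate) samples.
import math

def dead_reckon(samples, dt=0.01):
    """samples: iterable of (accel_forward_mps2, yaw_rate_radps). Returns (x, y, heading)."""
    x = y = heading = v = 0.0
    for a, w in samples:
        v += a * dt                      # integrate forward acceleration -> speed
        heading += w * dt                # integrate yaw rate -> heading
        x += v * math.cos(heading) * dt  # project speed into the navigation frame
        y += v * math.sin(heading) * dt
    return x, y, heading

# One second of constant 1 m/s^2 acceleration while turning gently.
print(dead_reckon([(1.0, 0.05)] * 100))
```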


#Ommatidia Lidar | Ocean observation | 3D Light Sensor | In-orbit characterization of large deployable reflectors (LDRs) | Channels: 128 parallel | Imaging vibrometry functionality | Target accuracy: 10 µm | Measurement range: 0.5-20 m | Measurement accuracy (MPE): 20 µm + 6 µm/m | Angular range: 30° x 360° | Vibrometry sampling frequency: 40 kHz | Vibrometry max in-band velocity: 15.5 mm/s | Power consumption: 45 W | Battery operation time: 240 min | Interface: Ethernet | Format: CSV / VTK / STL / PLY / TXT | Dimensions: 150 x 228 x 382 mm | Weight: 7.5 kg | Pointer: ~633 nm | Temperature range: 0-40 °C | Environmental protection class: IP54 | Eye safety: Class 1M | Raw point clouds: over 1 million points | Calibration: metrology-grade with compensation of thermal and atmospheric effects | ESA


#Heliogen | Decarbonizing industry with concentrated sunlight


#Ross Sea Regional Working Group (REG) | Assisting delivery of coordinated and standardised observations of essential variables in Ross Sea


#Southern Ocean Observing System (SOOS) | Facilitating and Enhancing Global Southern Ocean Observations


#Portland State University | Primary Productivity in the Indian Sector of the Southern Ocean: Observations from Three Austral Summers


#University of Tasmania | Multidisciplinary Investigation of the Southern Ocean


#LookOut | AI vision system | Synthesizes data from charts, AIS, computer vision, and the cloud, fusing it into one 3D augmented reality view | Connects to existing boat display | Camera system mountable to the top of any boat | LookOut App for laptop, phone or tablet | Infrared vision | Night vision sensor | Spotting small vessels, floating debris, buoys, people in the water | Blind spot detection | Backup camera | Temperature breaks, bird cluster locations, underwater structures for anglers | Camera streaming over WiFi to phones and tablets on the boat | Over-the-air (OTA) updates | Marine-grade waterproof enclosure | Integrated with satellite compass | National Marine Electronics Association (NMEA) communication standard interface | Multifunction Display (MFD) | Multi-core CPU driving augmented reality compute stack | ClearCloud service | NVIDIA RTX GPU for real-time computer vision | DockWa app
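
A sketch of one piece of an AR overlay like the one described above: placing an AIS target into a forward-looking camera image. A flat-earth approximation and a simple linear camera mapping are assumed; LookOut's actual fusion pipeline is not public and the coordinates below are arbitrary.

```python
# Project an AIS contact into a camera image by relative bearing (illustrative only).
import math

def ais_to_pixel(own_lat, own_lon, own_heading_deg, tgt_lat, tgt_lon,
                 fov_deg=90.0, image_width_px=1920):
    """Return (horizontal pixel, range in metres) for a target, or None if outside the FOV."""
    # Local flat-earth offsets in metres (adequate for a few nautical miles).
    dy = (tgt_lat - own_lat) * 111_320.0
    dx = (tgt_lon - own_lon) * 111_320.0 * math.cos(math.radians(own_lat))
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0         # 0 deg = true north
    rel = (bearing - own_heading_deg + 180.0) % 360.0 - 180.0  # relative bearing, -180..180
    if abs(rel) > fov_deg / 2:
        return None
    px = (rel / fov_deg + 0.5) * image_width_px                # linear mapping across the FOV
    return px, rng

print(ais_to_pixel(42.35, -70.95, own_heading_deg=90.0, tgt_lat=42.36, tgt_lon=-70.93))
```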


#SiLC | Machine Vision solutions with FMCW LiDAR vision | FMCW at the 1550nm wavelength | Eyeonic Vision Sensor platform | Detecting vehicles and various obstacles from long distances | Honda Xcelerator Ventures | Honda Marine
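
Back-of-envelope form of the FMCW range equation that sensors like the one above rely on: the beat frequency between the transmitted and returned chirps encodes range. The chirp parameters below are generic illustrations, not Eyeonic specifications.

```python
# FMCW LiDAR range from beat frequency: R = c * f_beat * T_chirp / (2 * B).
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz: float, chirp_bandwidth_hz: float, chirp_duration_s: float) -> float:
    """Range implied by a measured beat frequency for a linear chirp."""
    return C * beat_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

# Example: a 1 GHz chirp over 10 us and a measured 2 MHz beat -> ~3 m.
print(fmcw_range(beat_hz=2e6, chirp_bandwidth_hz=1e9, chirp_duration_s=10e-6))
```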


#National Technical University of Athens | MariNeXt deep-learning framework detecting and identifying marine pollution | Sentinel-2 imagery | Detecting marine debris and oil spills on the sea surface | Automated data collection and analysis across large spatial and temporal scales | Deep learning framework | Data augmentation techniques | Multi-scale convolutional attention network | Marine Debris and Oil Spill (MADOS) dataset | cuDNN-accelerated PyTorch framework | NVIDIA RTX A5000 GPUs | NVIDIA Academic Hardware Grant Program | AI framework produced promising predictive maps | Shortcomings: unbalanced dataset in which marine water and oil spills are abundant while foam and natural organic material are less represented
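
A minimal PyTorch sketch of the kind of workflow described above: training a semantic-segmentation model on multispectral Sentinel-2-like patches with simple augmentation. The tiny network below stands in for MariNeXt, whose multi-scale convolutional attention architecture is not reproduced here; band and class counts are illustrative.

```python
# Illustrative segmentation training step; shapes and class count are assumptions.
import torch
import torch.nn as nn

N_BANDS, N_CLASSES = 11, 15          # e.g. Sentinel-2 bands in, MADOS-style classes out

model = nn.Sequential(               # tiny fully convolutional stand-in network
    nn.Conv2d(N_BANDS, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, N_CLASSES, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def augment(x, y):
    """Random horizontal flip, a basic geometric augmentation."""
    if torch.rand(1) < 0.5:
        x, y = torch.flip(x, dims=[-1]), torch.flip(y, dims=[-1])
    return x, y

# One training step on a random stand-in batch (4 patches of 64x64 pixels).
x = torch.randn(4, N_BANDS, 64, 64)
y = torch.randint(0, N_CLASSES, (4, 64, 64))
x, y = augment(x, y)
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```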


#Yamaha Marine | 450 hp hydrogen-powered V-8 outboard | Three 6-foot-long cylindrical hydrogen fuel tanks | H2 engine operates by burning hydrogen in its combustion chambers | H2 tanks are positioned low and centrally to enhance stability | The size of the H2 tanks demands a rethinking of future boat designs, with hulls specifically tailored for hydrogen storage | Hydrogen storage system adds considerable weight to the vessel | Volumetric energy density of hydrogen is lower, requiring larger tanks | Partners: Roush Performance, Regulator Marine
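
Rough arithmetic behind the lower volumetric energy density point: comparing approximate literature values for gasoline and 700-bar compressed hydrogen shows the tank-volume penalty. The energy budget and figures below are illustrative, not Yamaha's numbers.

```python
# Approximate volumetric energy densities (LHV basis); values are rounded literature figures.
GASOLINE_MJ_PER_L = 34.0      # ~energy density of gasoline
H2_700BAR_MJ_PER_L = 5.6      # ~compressed hydrogen at 700 bar

energy_needed_MJ = 2000.0     # arbitrary example energy budget for a trip
gasoline_litres = energy_needed_MJ / GASOLINE_MJ_PER_L
hydrogen_litres = energy_needed_MJ / H2_700BAR_MJ_PER_L

print(f"gasoline: {gasoline_litres:.0f} L, hydrogen: {hydrogen_litres:.0f} L, "
      f"ratio: {hydrogen_litres / gasoline_litres:.1f}x")   # roughly 6x the tank volume
```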


#Feadship | Hydrogen fuel-cell superyacht | Double-walled cryogenic tank in a dedicated room | 4 tons of hydrogen | Cruising protected marine zones | Cryogenic storage of liquefied hydrogen in the superyacht interior | No existing regulations for hydrogen storage and fuel-cell systems on superyachts


#Intergovernmental Negotiating Committee (INC-5) | Developing international legally binding instrument on plastic pollution | Raising awareness about the serious impacts of plastic pollution on both humans and nature | Global bans and phase-outs of the most harmful and problematic plastic products and chemicals | Global product design requirements to ensure all plastic produced is safe to reuse and recycle as part of global non-toxic circular economy


#Tampere University | Pneumatic touchpad | Soft touchpad sensing force, area and location of contact without electricity | Device uses pneumatic channels embedded in it for detection, so it needs no electricity | Made entirely of soft silicone | 32 channels that adapt to touch | Precise enough to recognise handwritten letters | Recognises multiple simultaneous touches | Ideal for use in environments such as MRI machines | Soft robots | Rehabilitation aids | If cancer tumours are found during an MRI scan, a pneumatic robot can take a biopsy while the patient is being scanned | Pneumatic devices can be used in strong radiation or conditions where even a small spark of electricity would cause a serious hazard
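
A conceptual sketch of reading such a touchpad: each channel reports a pressure change, and a weighted centroid over the channel positions gives a single contact location, with total pressure as a force proxy. The channel layout, grid and noise floor are invented; the Tampere device's actual decoding (handwriting, multi-touch) is more sophisticated.

```python
# Weighted-centroid contact decoding over an assumed 8 x 4 grid of pressure channels.
import numpy as np

# Place 32 channel "taxels" on an 8 x 4 grid (x, y coordinates in millimetres).
channel_xy = np.array([(10 * i, 10 * j) for j in range(4) for i in range(8)], dtype=float)

def decode_contact(pressures_kpa: np.ndarray):
    """Return (x_mm, y_mm, total_pressure) for a single contact, or None if no touch."""
    total = pressures_kpa.sum()
    if total < 0.5:                               # assumed noise floor
        return None
    xy = (pressures_kpa[:, None] * channel_xy).sum(axis=0) / total
    return xy[0], xy[1], total

# Simulated press spanning the channels at (20, 20) mm and (30, 20) mm.
p = np.zeros(32)
p[[2 + 8 * 2, 3 + 8 * 2]] = [1.0, 1.0]
print(decode_contact(p))                          # -> (25.0, 20.0, 2.0)
```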


#BrainChip | Akida Pico | Ultra-low power acceleration co-processor | Enabling development of ultra-compact, intelligent devices | Akida2 event-based computing platform | Ultra-low-power (less than a milliwatt) neural processing unit (NPU) | AI accelerator for battery-powered, compact intelligent devices (hearing aids, noise-cancelling earbuds, medical equipment) | Event-based co-processor | Intended for voice wake detection, keyword spotting, speech noise reduction, audio enhancement, presence detection, personal voice assistants, automatic doorbells, wearable AI and appliance voice interfaces | Supports power islands for minimal standby power
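
A conceptual illustration of the always-on, event-driven workloads this class of co-processor targets: most audio frames are rejected by a cheap energy gate, and only "events" wake a heavier classifier. None of this uses BrainChip's SDK or Akida hardware; it is plain NumPy, with made-up frame length and gate threshold.

```python
# Event-driven keyword-spotting front end: cheap energy gate before any classifier runs.
import numpy as np

def frame_energy(frame: np.ndarray) -> float:
    return float(np.mean(frame.astype(np.float64) ** 2))

def run_keyword_spotter(audio: np.ndarray, frame_len=400, energy_gate=1e-4):
    """Yield start indices of frames loud enough to be worth classifying."""
    for start in range(0, len(audio) - frame_len, frame_len):
        frame = audio[start:start + frame_len]
        if frame_energy(frame) > energy_gate:     # event: hand off to the NPU/classifier
            yield start

# 1 s of near-silence with a short louder burst in the middle (16 kHz mono).
audio = np.random.normal(0, 0.001, 16_000)
audio[8_000:8_400] += np.random.normal(0, 0.1, 400)
print(list(run_keyword_spotter(audio)))           # -> the frame starting at index 8000
```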


#Allen Institute for Artificial Intelligence | AI for the Environment | Robot planning precise action points to perform tasks accurately and reliably | Vision Language Model (VLM) controlling robot behavior | Introducing automatic synthetic data generation pipeline | Instruction-tuning VLM to robotic domains and needs | Predicting image keypoint affordances given language instructions | RGB image rendered from procedurally generated 3D scene | Computing spatial relations from camera perspective | Generating affordances by sampling points within object masks and object-surface intersections | Instruction-point pairs fine-tune language model | RoboPoint predicts 2D action points from image and instruction, which are projected into 3D using depth map | Robot navigates to these 3D targets with motion planner | Combining object and space reference data with VQA and object detection data | Leveraging spatial reasoning, object detection, and affordance prediction from diverse sources | Enabling combinatorial generalization | Synthetic dataset used to teach RoboPoint relational object reference and free space reference | Red and green boxes as visual prompts to indicate reference objects | Cyan dots as visualized ground truth | NVIDIA | Universidad Catolica San Pablo | University of Washington
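
The entry above describes projecting predicted 2D action points into 3D with a depth map. Below is the standard pinhole back-projection that step implies; the camera intrinsics, pixel and depth values are made up, and this is not code from RoboPoint.

```python
# Pinhole back-projection of a predicted 2D action point into the camera frame.
import numpy as np

def backproject(u: float, v: float, depth_m: float, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Map pixel (u, v) with depth z into a 3D point in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: a predicted action point at pixel (400, 300) seen 0.8 m away,
# with a nominal 640x480 camera (fx = fy = 600, principal point at the centre).
point_3d = backproject(400, 300, depth_m=0.8, fx=600, fy=600, cx=320, cy=240)
print(point_3d)     # -> [0.1067, 0.08, 0.8]; a motion planner would drive the robot here
```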