ISO/IEC JTC 1/SC 29 N 1823
DATE: 1996-11-16
OUR REF.: 29CL348/29F021/29D096/
29N1823c.htm
29N1823x.gif (x:1-3)


ISO/IEC JTC 1/SC 29

Coding of Audio, Picture, Multimedia and Hypermedia Information

Secretariat: Japan (JISC)

DOC. TYPE Meeting Report
TITLE Meeting Report, the Thirty-sixth ISO/IEC JTC 1/SC 29/WG 11 Meeting, 1996-09-30/10-02, Chicago, US [JTC 1/SC 29/WG 11 N 1353]
SOURCE Convener, ISO/IEC JTC 1/SC 29/WG 11
PROJECT NO. --
STATUS Draft meeting report
REFERENCES  SC 29 N 1773 (WG 11 N 1352): Resolutions
ACTION ID. FYI
REQUESTED ACT. For SC 29's information
DUE DATE --
DISTRIBUTION P-, O- and L-members, ISO/IEC JTC 1/SC 29
Officers, ISO/IEC JTC 1/SC 29
Secretariat, ISO/IEC JTC 1
ISO/IEC ITTF
MEDIUM Def/D
NO. OF PAGES N/A



Narumi Hirose, Secretariat, ISO/IEC JTC 1/SC 29
IPSJ/ITSCJ*, Room 308-3, Kikai-Shinko-Kaikan Bldg., 3-5-8, Shiba-Koen, Minato-Ku Tokyo 105 Japan
Telephone: +81-3-3431-2808; Facsimile: +81-3-3431-6493; Telex: 2425340 IPSJ J; E-mail: nhirose@attmail.com
* Information Processing Society of Japan/Information Technology Standards Commission of Japan (A standards organization accredited by JISC)


INTERNATIONAL ORGANISATION FOR STANDARDISATION
ORGANISATION INTERNATIONALE DE NORMALISATION
ISO/IEC JTC1/SC29/WG11
CODING OF MOVING PICTURES AND AUDIO
ISO/IEC JTC1/SC29/WG11 N1353
October 1996

Source: Leonardo Chiariglione, Convenor
Title: Report of 36th WG11 meeting
Status: Draft

1. Opening

The 36th WG11 meeting was held in Chicago, IL, US on 96/09/30-10/02 at the kind invitation of ANSI, the US national standards body, and was hosted by Motorola.

2. Roll call of participants

Annex 1 gives the attendance list.

3. Approval of agenda

Annex 2 gives the approved agenda.

4. Allocation of contributions

Annex 3 gives the list of submitted documents.

5. Communications from Convenor

There were no special communications.

6. Report of previous meeting

The Convenor apologised for his inability to provide a report of the previous meeting.

7. Processing of NB Position Papers

These papers, from DE, FR, JP and US, were discussed and responses provided.

8. MPEG Phase 2

8.1 Audio

No activity took place.

8.2 Verification of MPEG-2

8.2.1 Video Quality

No activity took place.

8.2.2 Audio Quality

No activity took place.

8.3 Amendments

8.3.1 Private Data (System #3)

No activity took place.

8.3.2 Multi View Profile (Video #3)

The Disposition of Comments on DAM 3 to ISO/IEC 13818-2 (WG11 N1367) and the text of Amendment 3 (WG11 N1366) were approved.

8.4 Part 7 (NBC Audio)

No activity took place.

8.5 Part 10 (DSM-CC Conformance)

No activity took place.

8.6 Workplan

This was approved.

9. MPEG Phase 4

9.1 Requirements

Ver. 1.1 of the MPEG-4 Requirements document (WG11 N1395) and the MPEG-4 profiles document (WG11 N1394) were approved.

9.2 Syntax

This activity took place in the context of the different VMs.

9.3 Tools

9.3.1 Systems

A considerable amount of activity took place in the area of multiplexing.

9.3.2 Natural Audio

A considerable amount of activity took place in the area of speech coding.

9.3.3 Synthetic Audio

A considerable amount of activity took place in the area of text-to-speech tools.

9.3.4 Natural Video

A considerable amount of activity took place in the area of video coding tools.

9.3.5 Synthetic Video

A considerable amount of activity took place in the area of face and body animation.

9.4 Verification Models

9.4.1 System

A Systems VM has not yet been developed.

9.4.2 Video

A further version (4.0) of the Video VM was produced (WG11 N1380). The SNHC part is contained in WG11 N1364.

9.4.3 Audio

A further version of the Audio VM was produced (WG11 N1378). The SNHC part is contained in WG11 N1364.

9.5 Tests

A draft document on MPEG-4 test procedures for July 1997 was approved.

9.6 Call for proposals

Fifteen submissions in response to the SNHC Call were received. A new Call for Synthetic Audio was approved (WG11 N1397).

9.7 Simulation software

A Verification Model Development and Core Experiments document (WG11 N1375), extending previous documents, was approved.

9.8 Working Draft

MSDL WD 1.3 was approved.

9.9 Workplan

This was approved.

10. MPEG Phase 8

A disposition of comments on the Man to Multimedia Service Interface NP was produced (WG11 N1400). In response to an NB comment, the NP was retitled "Multimedia Content Description Interface" and nicknamed MPEG-7.

11. Overall WG11 workplan

This was approved.

12. Terms of Reference

This could not be done because of the short duration of the meeting.

13. Liaison matters

Input documents were considered and liaison letters approved.

14. Administrative matters

14.1 Schedule of future MPEG meetings

The April 1997 meeting will be held in Bristol, UK. Further, the meeting recognised that the critical phase of development of the MPEG-4 standard required an extra meeting in the January-February time frame. The Convenor was asked to take all appropriate measures to secure an invitation for such a meeting.

14.2 Promotion of MPEG

A general document describing MPEG-4 and a press release were approved.

15. Organisation of this meeting

15.1 Tasks for subgroups

Requirements
System
Video
Audio
SNHC
Test
Implementation Studies
Liaison

15.2 Finalisation of meeting allocation

The following joint meetings were held:
SNHC-MSDL Mon 11:00-13:00
Video-ISG Tue 09:00-10:00
Video (MVP)-Test Mon 11:00-13:00
Video-Test-Req-MSDL-SNHC-ISG Tue 10:00-11:00
Test-Audio Tue 12:00-13:00
MSDL-Req Tue 09:00-10:00
Video-Req Wed 09:00-09:30

16. Planning of future activities

The following ad-hoc groups were established:
1372 Adhoc Group MPEG-4 July 1997 Audio/Visual Tests
1390 Adhoc Group on Coding Efficiency
1371 Adhoc Group on Computational Graceful Degradation
1377 Adhoc Group on Core Experiments for MPEG-4 Audio
1389 Adhoc Group on Core Experiments on Multifunctional Coding
1407 Adhoc Group on Definition and Measurement of Statistical Performance Parameters of MPEG-4 Video VM
1392 Adhoc Group on Editing MPEG-4 Video VM Document
1363 Adhoc Group on Editing of SNHC VM Specifications
1406 Adhoc Group on Editing Video WD
1388 Adhoc Group on Error Resilience
1360 Adhoc Group on Face and Body Animation
1370 Adhoc Group on Investigating Reduced Complexity Padding for the Video VM
1409 Adhoc Group on Joint Video-SNHC Technical Issues
1361 Adhoc Group on Media Integration of Text and Graphics
1376 Adhoc Group on MPEG-4 Audio WD Editing and VM Software Implementation
1393 Adhoc Group on MPEG-4 Requirements
1405 Adhoc Group on MSDL Architecture Evolution
1403 Adhoc Group on MSDL Verification Model
1402 Adhoc Group on MSDL Working Draft editing
1404 Adhoc Group on Multiplex Specification and Signaling
1391 Adhoc Group on Region Oriented Texture Coding
1387 Adhoc Group on Shape and Alpha Coding
1362 Adhoc Group on SNHC
1359 Adhoc Group on SNHC Audio
1408 Adhoc Group on Video Low Delay Evaluations

Details of each ad-hoc group can be found in WG11 N1381.

17. Resolutions of this meeting

These were approved (WG11 N1352).

18. A.O.B.

There was no other business.

19. Closing

The meeting closed at 22:00 on 1996/10/02.


Annex 1
Attendance list

Mr. Laervit
David Shu
Michael Frater | University of New South Wales | AU
Marc Van Droogenbroeck | Belgacom | BE
Gauthier Lafruit | IMEC | BE
Thierry Delmot | U.C.L. Telecommunication Laboratory | BE
Joan-Maria Mas Ribes | U.C.L. Telecommunication Laboratory | BE
Faouzi Kossentini | University of British Columbia | CA
Frank Bossen | EPFL | CH
Tolga K. Capin | EPFL | CH
Touradj Ebrahimi | EPFL | CH
Marco Mattavelli | EPFL | CH
Igor Pandzic | University of Geneva | CH
Andreas Graffundes | DE
Klaus Diepold | Beta Technik GmbH | DE
Angelika Knoll | Deutsche Telekom AG | DE
Peter List | Deutsche Telekom AG | DE
Carsten Herpel | Deutsche Thomson-Brandt GmbH | DE
Jens Spille | Deutsche Thomson-Brandt GmbH | DE
Karl Heinz Brandenburg | FhG | DE
Thorsten Selinger | Heinrich-Hertz-Institut Berlin | DE
Jan De Lameillieure | Heinrich-Hertz-Institut Berlin | DE
Gerald Knabe | Q-Team | DE
Sven Bauer | Robert Bosch GmbH | DE
Christian Koechling | Robert Bosch GmbH | DE
Gunnar Nitsche | Robert Bosch GmbH | DE
Andre Kaup | Siemens AG | DE
Andreas Hutter | TU Muenchen | DE
Peter Kuhn | TU Muenchen | DE
Heiko Purnhagen | Universitaet Hannover | DE
Bernhard Grill | University of Erlangen | DE
Thomas Wiegand | University of Erlangen-Nuremberg | DE
Bernd Edler | University of Hannover | DE
Achim Freimann | University of Hannover | DE
Peter Gerken | University of Hannover | DE
Miguel Roser | Telefonica Investigacion y Desarrollo | ES
Francisco Moran | Universidad Politecnica de Madrid | ES
Mauri Vaananen | Nokia | FI
Marcin Rzeszutko | Nokia Research Center | FI
Isabelle Amonou | Canon Research Center | FR
Jean-Claude Dufourd | France Telecom, ENST | FR
Olivier Avaro | France Telecom, CNET | FR
Gerard Eude | France Telecom, CNET | FR
Henri Sanson | France Telecom, CCETT | FR
Julien Signes | France Telecom, CCETT | FR
Gilles Privat | France Telecom, CNET-Grenoble | FR
Lionel Bouchard | Laboratoires d'Electronique Philips (LEP) | FR
Isabelle Corset | Laboratoires d'Electronique Philips (LEP) | FR
David Molter | Laboratoires d'Electronique Philips (LEP) | FR
Noel Brady | Teltec Ireland | IE
Liam Ward | Teltec Ireland | IE
Dan Tamir | Motorola | IL
Leonardo Chiariglione | CSELT | IT
Laura Contin | CSELT | IT
Roberto Pockaj | DIST University of Genova | IT
Tsuyoshi Kasahara | Casio Computer Co., Ltd. | JP
Eishi Morimatsu | Fujitsu Laboratories Ltd. | JP
Itaru Kaneko | Graphics Communication Laboratories | JP
Akira Date | Hitachi, Ltd. | JP
Yuichiro Nakaya | Hitachi, Ltd. | JP
Satoshi Katsuno | Kokusai Denshin Denwa Co., Ltd. | JP
Naoya Tanaka | Matsushita Communication Ind. Co., Ltd. | JP
Koji Imura | Matsushita Communication Ind. Co., Ltd. | JP
Yutaka Machida | Matsushita Communication Ind. Co., Ltd. | JP
Minoru Etoh | Matsushita Electric Ind. Co., Ltd. | JP
Shinya Kadono | Matsushita Electric Ind. Co., Ltd. | JP
C. S. Boon | Matsushita Electric Industrial Co. Ltd. | JP
Maki Okuno | Matsushita Electric Industrial Co. Ltd. | JP
Kohtaro Asai | Mitsubishi Electric Corp. | JP
Tokumichi Murakami | Mitsubishi Electric Corp. | JP
Takahiro Fukuhara | Mitsubishi Electric Corp. | JP
Yoshihiro Miyamoto | NEC Corporation | JP
Jiro Katto | NEC Corporation | JP
Hitoshi Koyama | NEC Corporation | JP
Toshiyuki Nomura | NEC Corporation | JP
Hiroyuki Imaizumi | NHK | JP
Shinichi Sakaida | NHK | JP
Kinya Oosa | Nippon Steel Corporation | JP
Hirohisa Jozawa | NTT | JP
Naoki Iwakami | NTT | JP
Takehiro Moriya | NTT | JP
Sanae Hotani | NTT DoCoMo | JP
Toshio Miki | NTT DoCoMo | JP
Toshifumi Kanamaru | Oki Electric Industry Co., Ltd. | JP
Zhixiong Wu | Oki Electric Industry Co., Ltd. | JP
Shigeru Fukunaga | Oki Electric Industry Co., Ltd. | JP
Neil Day | Ricoh Co. Ltd. | JP
Keiichi Hibi | Sharp Corporation | JP
Hiroyuki Katata | Sharp Corporation | JP
Kazuyuki Iijima | Sony Corp. | JP
Jun Matsumoto | Sony Corp. | JP
Masayuki Nishiguchi | Sony Corp. | JP
Ogata | Sony Corp. | JP
Teruhiko Suzuki | Sony Corp. | JP
Yoichi Yagasaki | Sony Corp. | JP
Takashi Koike | Sony Corporation | JP
Kenzo Akagiri | Sony Corporation | JP
Yuji Itoh | Texas Instruments Tsukuba R&D Center Ltd. | JP
Yoshihiro Kikuchi | Toshiba | JP
Toshiaki Watanabe | Toshiba | JP
Shigenobu Minami | Toshiba Corp. | JP
Tsuneo Nitta | Toshiba Corp. | JP
Akihisa Kodate | Waseda University | JP
Somkiat Wangripitak | Waseda University | JP
Jong-Il Jin | KR
Jung Chul Lee | KR
Jong-il Kim | Daewoo | KR
Jinhun Kim | Daewoo | KR
Young-Kwon Lim | ETRI | KR
Sung Moon Chun | Hyundai Electronics Industries Co. | KR
Joo Hee Moon | Hyundai Electronics Industries Co. | KR
Gwang Hoon Park | Hyundai Electronics Industries Co. | KR
Jae-won Chung | KAIST | KR
Yung-Lyul Lee | KAIST | KR
Sang-hee Lee | KAIST | KR
Shi-Hwa Lee | Samsung | KR
Euee-Seon Jang | Samsung AIT | KR
Sang-Wook Kim | Samsung AIT | KR
Jae-Seob Shin | Samsung AIT | KR
Rob Koenen | KPN Research | NL
Werner Oomen | Philips Research | NL
Rob Beuker | Philips Research Laboratories | NL
Andrew Perkis | Norwegian Telecommunications Authority | NO
Aasmund Sandvand | Telenor R&D | NO
Paulo Nunes | Instituto de Telecomunicacoes | PT
Fernando Pereira | Instituto Superior Tecnico | PT
Harald Brusewitz | Ericsson Radio Systems AB | SE
Bo Burman | Ericsson Radio Systems AB | SE
Morgan Lindqvist | Ericsson Radio Systems AB | SE
Goran Roth | Ericsson Radio Systems AB | SE
Torbjorn Einarsson | Ericsson Telecom AB | SE
Per Thorell | Ericsson Telecom AB | SE
Ola Andersson | Telia Research AB | SE
Olle Franceschi | Teracom | SE
Marta Karczewicz | Nokia Research Center | SF
Tan Thiow Keng | Panasonic Singapore Laboratories Pte Ltd. | SG
Ah-Peng Tan | Panasonic Singapore Laboratories Pte Ltd. | SG
Thiow-Keng Tan | Panasonic Singapore Laboratories Pte Ltd. | SG
Patrick Mulroy | BT Laboratories, Ipswich | UK
Paola Hobson | Motorola Ltd. | UK
Paul Fellows | SGS-THOMSON | UK
John Arnold | The University of New South Wales | UK
Su Loe Young | US
Hsi-Jung Wu | Apple Computer, Inc. | US
Tsuhan Chen | AT&T | US
Barry Haskell | AT&T | US
Jim Johnston | AT&T | US
Joern Ostermann | AT&T | US
Atul Puri | AT&T | US
Peter Kroon | Bell Laboratories, Lucent Technologies | US
Alexandros Eleftheriadis | Columbia University | US
Yihan Fang | Columbia University | US
Sorin C. Cismas | CompCore Multimedia, Inc. | US
Richard Schaphorst | Delta Information Systems | US
Gary Demos | DemoGraFX | US
Frederic Dufaux | Digital Equipment Corp. | US
Gerry Segal | Digital Media Interactive | US
Marina Bosi | Dolby Laboratories | US
Robert Senn | Eastman Kodak Company | US
Rajan Laxman Joshi | Eastman Kodak Company | US
Richard Ivy | ESP | US
Mike Coleman | Five Bats Research | US
Jason Yao | Fujitsu Laboratories of America | US
Ajay Luthra | General Instrument Corp. | US
Sam Narasimhan | General Instrument Corp. | US
Xuemin Chen | General Instrument Corp. | US
Ganesh Rajan | General Instrument Corp. | US
Alexander Drukarev | Hewlett Packard Labs | US
Ronnie Burns | Hughes Electronics | US
Donald Mead | Hughes Electronics | US
Ram Nagarajan | Hughes Electronics | US
Chris Hansen | Intel Corporation | US
Thomas Gardos | Intel Corporation | US
Roger Chuang | Iterated Systems Inc. | US
John Muller | Iterated Systems, Inc. | US
Michael Zeug | Iterated Systems, Inc. | US
Weiping Li | Lehigh University | US
Kwok Chau | LSI Logic | US
Caspar Horne | Mediamatics, Inc. | US
Ming-Chieh Lee | Microsoft Corporation | US
Wei-ge Chen | Microsoft Corporation | US
Huifang Sun | Mitsubishi | US
David Thom | Mitsubishi Electronic America Inc. | US
Bob Bell | Mitsubishi Electronics America, Inc. | US
Glen Young | Mitsubishi Electronics America, Inc. | US
Cheung Auyeung | Motorola | US
Mark Banham | Motorola | US
Jim Brailean | Motorola | US
Kevin O'Connell | Motorola | US
Davis Pan | Motorola | US
Otto Schnurr | Motorola | US
Kiran Challapali | Philips Research | US
Dave Lindbergh | Picturetel | US
Gary Sullivan | Picturetel | US
Donald Pian | QUALCOMM Incorporated | US
Homer Chen | Rockwell | US
Janice Shen | Rockwell | US
Stephane Barbu | Rockwell Semiconductor Systems | US
James Thi | Rockwell Semiconductor Systems | US
Si Jun Huang | Scientific-Atlanta Inc. | US
Dean Messing | Sharp Laboratories of America | US
Regis Crinon | Sharp Laboratories of America Inc. | US
Shawmin Lei | Sharp Laboratories of America Inc. | US
Ibrahim Sezan | Sharp Laboratories of America Inc. | US
Richard Qian | Sharp Labs of America | US
Chuck Lueck | Texas Instruments | US
Iole Moccagatta | Texas Instruments | US
Raj Talluri | Texas Instruments | US
John Villasenor | UCLA | US
Vladimir Cuperman | University of California (SB) | US
Osama Alshaykh | University of California, Berkeley | US
Avideh Zakhor | University of California, Berkeley | US
Hai Tao | University of Illinois at Urbana-Champaign | US
Pei-Hwa Ho | University of Pennsylvania | US
A. M. Tekalp | University of Rochester | US
P. J. L. Van Beek | University of Rochester | US
Philip Chou | Xerox PARC | US



Annex 2
Agenda

1. Opening
2. Roll call of participants
3. Approval of agenda
4. Allocation of contributions
5. Communications from Convenor
6. Report of previous meeting
7. Processing of NB Position Papers
8. MPEG Phase 2
8.1 Audio
8.2 Verification of MPEG-2
8.2.1 Video Quality
8.2.2 Audio Quality
8.3 Amendments
8.3.1 Private Data (System #3)
8.3.2 Multi View Profile (Video #3)
8.4 Part 7 (NBC Audio)
8.5 Part 10 (DSM-CC Conformance)
8.6 Workplan
9. MPEG Phase 4
9.1 Requirements
9.2 Syntax
9.3 Tools
9.3.1 Systems
9.3.2 Natural Audio
9.3.3 Synthetic Audio
9.3.4 Natural Video
9.3.5 Synthetic Video
9.4 Verification Models
9.4.1 System
9.4.2 Video
9.4.3 Audio
9.5 Tests
9.6 Call for proposals
9.7 Simulation software
9.8 Working Draft
9.9 Workplan
10. MPEG Phase 8
11. Overall WG11 workplan
12. Terms of Reference
13. Liaison matters
14. Administrative matters
14.1 Schedule of future MPEG meetings
14.2 Promotion of MPEG
15. Organisation of this meeting
15.1 Tasks for subgroups
15.2 Finalisation of meeting allocation
16. Planning of future activities
17. Resolutions of this meeting
18. A.O.B.
19. Closing

Annex 3
Document Register

Source: Pete Schirling

No. | Source | Title
1161 | Pete Schirling | Document Register for 36th Meeting in Chicago
1162 | Ming-Chieh Lee, Wei-ge Chen, Bruce Lin, Chuang Gu | Microsoft Software C++ Implementation of MPEG-4 Video VM3.x
1163 | Antonio Carvalho | MSDL - Multiplexing & System tools
1164 | ISO/IEC ITTF via SC 29 Secretariat | Summary of Voting, ISO/IEC 13818-2/DAM 3 (SC 29 N 1554)
1165 | Fabio Lavagetto, Igor Pandzic, Roberto Pockaj, Marc Escher, Tolga Capin | VIDAS submission to SNHC CfP on facial animation
1166 | Bernhard Grill | Use of alternative frame lengths for MPEG-2 NBC Audio Coding in MPEG-4
1167 | Ion-Paul Beldie, Kambiz Fazel | HHI subjective test results of MPEG-2 Multi-View Profile
1168 | Hiroyuki Imaizumi, Ryoichi Yajima, Eisuke Nakasu | Subjective Test Results of MPEG-2 Multi-View Profile at NHK
1169 | Hiroyuki Imaizumi, Ajay Luthra | Report of the AHG on Subjective Testing of MPEG-2 Multi-View Profile
1170 | Norman I. Badler, Pei-Hwa Ho | Proposal for Human Body Animation
1171 | Richard Schaphorst | Standardized Text and Graphics Overlay for Video Media
1172 | Richard Schaphorst | Liaison to MPEG4 re Cooperation with LBC
1173 | Richard Schaphorst | Comments on MSDL-M Specification
1174 | Ram Nagarajan, Ron Burns, Pete Doenges, Tsuhan Chen, Baldine Paul | Addendum to SNHC Application Objectives and Requirements
1175 | Peter H. Au, David Shu | Results of Core Experiment S2
1176 | Ram Nagarajan, Peter Au | Proposal for a Core Experiment on Grayscale Shape Coding Techniques
1177 | Peter H. Au, David Shu | Proposal for a Core Experiment on Low Latency Transmission of Sprite Objects
1178 | T.K. Tan, S.M. Shen | Results for Core Experiment T9 - DC/AC Prediction
1179 | Toshio Miki, Toshiro Kawahara | Error patterns for Error resilience core experiments
1180 | Secretariat, ISO/IEC JTC 1/SC 29 | Set of Documents regarding NP on Man multimedia service interface
1181 | G. Russo, S. Colonnese | Development of core experiment N2 on automatic segmentation techniques: FUB results
1182 | Laura Contin | Test methods and procedures for July 97 MPEG-4 tests
1183 | Laura Contin | Report of the ad hoc group on MPEG-4 July 97 Audio Visual Test
1184 | Anthony Vetro, Huifang Sun, Jay Bao | Core Experiment on Q2 Rate Control
1185 | Wa James Tam, Lew B. Stelmach | CRC Subjective Test Results of MPEG-2 Multi-View Profile
1186 | Regis J. Crinon, Ibrahim Sezan | Unified Syntax for Static and Dynamic Sprite-based Coding
1187 | Richard J. Qian, M. Ibrahim Sezan | Vertex-Based Hierarchical Shape Representation and Coding
1188 | P. Gerken, R. Mech | Automatic Segmentation of Moving Objects (Partial Results of Core Experiment N2)
1189 | Marta Karczewicz | Description of Core Experiment P9
1190 | Pierre-Emmanuel Chaut, Marie-Luce Viaud, Agnes Saulnier | Analysis/Synthesis system for facial animation
1191 | Ulrich Benzler | Results of Core Experiment P8 (Motion and Aliasing compensating Prediction)
1192 | Marco Mattavelli | Report of the Ad-hoc group on computational graceful degradation
1193 | Marco Mattavelli, Sylvain Brunetton | Measures of the range of computational based scalability
1194 | Joern Ostermann | Report of the AdHoc Group on Core Experiments on Object- or Region-Oriented Texture Coding in MPEG-4 Video
1195 | Joern Ostermann | Report of the AdHoc Group on Core Experiments on MPEG-4 Video Shape Coding
1196 | Joern Ostermann | Report of the AdHoc Group on SNHC Proposals for Testing
1197 | Joern Ostermann | An Interface for the Animation of Human Heads from Text
1198 | Richard Ivy, Michael Zeug | Report of Ad Hoc Group on MPEG4 Low Delay Evaluations
1199 | Keith Kenemer, Dmitriy Korchev, Michael Zeug | Complexity Analysis of the Decoder used in the P5 Core Experiment
1200 | T.K. Tan, S.M. Shen | Results of 2D and 3D Intra VLC Tables under Core Experiment T9 - DC/AC Prediction
1201 | Michael R. Frater, John F. Arnold | 12 Bit Video for MPEG 4
1202 | Naoki Iwakami, Takehiro Moriya, Kazunaga Ikeda, Satoshi Miki, Akio Jin | Technical description of VM T/F coder based on LPC and VQ
1203 | Michael R. Frater, John F. Arnold | Fixed Interval Resynchronisation Applied to MPEG 2
1204 | Angelika Knoll | Interoperability of a MPEG-4 terminal
1205 | Koji Imura, Yutaka Machida | The Results of the Core Experiment on Error Resilience (E1)
1206 | Hisashi Saiga, Shuichi Watanabe, Hiroyuki Katata, Hiroshi Kusao | Experiments on Context-based Arithmetic Encoding
1207 | The National Body of Japan | On Intra Frame Coding for Still Image Compression
1208 | Peter List | Proposal for new Core Experiment: "Unification of B- and PB-prediction"
1209 | Tolga K. Capin | Report of the AHG on Human Body Representation
1210 | Shinichi Sakaida, Wentao Zheng, Yutaka Kaneko, Yoshiaki Shishikui | Region Support DCT (RS-DCT) for coding of region texture
1211 | Ronan Boulic, Tom Molet, Tolga Capin, Igor Pandzic, Nadia Magnenat Thalmann, Daniel Thalmann | EPFL/University of Geneva: Body Representation Proposal
1212 | Zhixiong Wu, Toshifumi Kanamaru | Results on Core Experiment T13: Block Based DCT and Wavelet Selective Coding
1213 | Touradj Ebrahimi (editor) | MPEG-4 Video Verification Model Version 3.2
1214 | Zhixiong Wu, Toshifumi Kanamaru | SA-DCT and SA-Wavelet Selective Coding for Arbitrary Shaped Image
1215 | Touradj Ebrahimi | Report of Ad Hoc Group on MPEG-4 Video VM Editing
1216 | Zhixiong Wu, Toshifumi Kanamaru | SA-DCT and SA-Wavelet Selective Coding for Arbitrary Shaped Image
1217 | Zhixiong Wu, Toshifumi Kanamaru | SA-DCT and SA-Wavelet Selective Coding for Arbitrary Shaped Image
1218 | Keiichi Hibi, Nobuyuki Ema | Results of efficient coding core experiments P6
1219 | Keiichi Hibi, Tadashi Uchiumi, Seiji Sato | Proposal of new error resilience core experiment by scalable coding
1220 | Keiichi Hibi, Tadashi Uchiumi, Seiji Sato | Wavelet Transform Video Coding for Error Resilience and Low Delay
1221 | Yoichi Yagasaki, Kazuhisa Hosaka | The results of CE S4g: scalable shape coding method
1222 | Yoichi Yagasaki, Kazuhisa Hosaka | New proposal for binary shape coding
1223 | Teruhiko Suzuki, Yoichi Yagasaki | The results of CE B1.1 and optimization of spatial scalability
1224 | Teruhiko Suzuki, Yoichi Yagasaki | The syntax issues for B-VOP and scalability
1225 | Andre Kaup, Anke Lorenz | Results of Core Experiment O9
1226 | Fernando Pereira | What about profiles in MPEG-4
1227 | Richard Ivy | Latency Issues in the MPEG4 VM and Proposal Evaluation
1228 | Masayuki Nishiguchi, Kazuyuki Iijima, Jun Matsumoto | A report on the HVXC parametric core for MPEG-4 Audio VM
1229 | The National Body of Japan | Comments on Resolution 18 of the 35th ISO/IEC JTC1/SC29/WG11 meeting
1230 | Simon A. J. Winder | Clarification of down sampling technique for deriving the chrominance alpha plane
1231 | Harald Brusewitz | Results with error resilient FLC
1232 | Harald Brusewitz | Results with GOB synchronisation
1233 | Harald Brusewitz | Results with error resilient tools
1234 | Harald Brusewitz | New Core Experiment with robust VLC
1235 | Harald Brusewitz | Results from Core Experiment P10
1236 | Frank Bossen | Geometry Compression
1237 | Frank Bossen, Noel Brady | Results of S4 Context-based Arithmetic Coding CE
1238 | Gerry Segal | Navigable Video and Use of A Novel MPEG Encoding
1239 | Hai Tao, Tom Huang, Homer Chen, Tsae-Pyng Janice Shen, Aruna Bayya | Technical Description of UIUC/Rockwell MPEG-4 SNHC Proposal
1240 | Hai Tao, Tom Huang, Homer Chen, Tsae-Pyng Janice Shen, Aruna Bayya | Technical description of UIUC/Rockwell MPEG-4 SNHC proposal
1241 | Toshiaki Watanabe, Yoshihiro Kikuchi | Comparison of Error Resilience Core Experiments E1
1242 | Toshiaki Watanabe, Yoshihiro Kikuchi | Comparison of Binary Shape Coding (Core Experiment S4)
1243 | Toshiaki Watanabe, Yoshihiro Kikuchi | Comparison of Texture Coding (Core Experiment O2)
1244 | Jung-Chul Lee, Sang-Hoon Kim | Multilevel Scalable TTS Synthesis
1245 | Eric Petajan | Video-driven Face Animation
1246 | Euee S. Jang, Se-Hoon Son, Yang-Seock Seo | Results of CE Q3 (Analysis of arithmetic coding for the MPEG-4 video VM)
1247 | Euee S. Jang, Dae-Sung Cho, Yang-Seock Seo | Results of CE T9/T10 (DC/AC prediction technique)
1248 | Sung-Gul Ryoo, Jae-Seob Shin, Yang-Seock Seo | Results of CE Q2 (Improved rate control)
1249 | Shi-Hwa Lee, Yu-Shin Cho, Jae-Seob Shin, Yang-Seock Seo | Results of CE S4 (Comparison of shape coding techniques)
1250 | Seong-Jin Kim, Jae-Seob Shin, Euee S. Jang, Yang-Seock Seo | Results of CE T12 (MC error suppression technique)
1251 | Takahiro Fukuhara, Kohtaro Asai, Shun-ichi Sekiguchi, Tokumichi Murakami | Results of Core Experiments P2 & P3 with B-VOP and PG-VOP
1252 | Shun-ichi Sekiguchi, Kohtaro Asai, Takahiro Fukuhara, Tokumichi Murakami | Results of Core Experiments P11
1253 | C. S. Boon, T. Takahashi | Some Consideration on Elementary Stream Management and Composition Information
1254 | C. S. Boon, S. Kadono | Inter Binary Shape Coding with Overlapped Motion Compensation and its Problems
1255 | C. S. Boon, J. Takahashi, S. Kadono | Experiment Results of Core Experiment O2
1256 | C. S. Boon, J. Takahashi, S. Kadono | Experiment Results of Core Experiment O4 and O9
1257 | Peter Kuhn | Complexity Analysis of the MPEG-4 Video Verification Model Decoder
1258 | Peter Kuhn | AHG Report on Video Verification Model Complexity Assessment
1259 | Yutaka Machida, Koji Imura | Further Improvement of Slice-based Error Detection
1260 | Wei Wu, Homer Chen, Jim Scholl | Results of Core Experiments T5/T6 on Vector Wavelet Coding
1261 | Yihan Fang, Alexandros Eleftheriadis | The Architectural Framework of MSDL-S
1262 | Yihan Fang, Alexandros Eleftheriadis | A Proposed Revision of MSDL-S
1263 | James Brailean, Mark Banham | Results for Core Experiment E1: Resynchronization Techniques
1264 | James Brailean | Report of ad-hoc group on error resilience aspects
1265 | Tsuhan Chen | Chroma-Keying for Coding of Regions (Core Experiment O6)
1266 | Tsuhan Chen, Ram R. Rao | Issues Concerning Audio-Visual Scalability
1267 | Weiping Li, S. Li, W. Li, F. Ling, H. Sun, J.P. Wus | A Proposal for New Syntax Elements in MPEG-4 Video VM
1268 | Weiping Li, S. Li, W. Li, F. Ling, H. Sun, J.P. Wus | Report on Core Experiments T5, T6, T11, and O3/O11
1269 | Shinya Kadono, C.S. Boon | Result on Shape Core Experiment S4d (Improved MMR Method)
1270 | Tsuyoshi Kasahara | Results of Core Experiment P2 & P3
1271 | Sanae Hotani, Toshio Miki | Consideration on Core Experiments of Error Resilience aspects in MPEG-4 Audio
1272 | John Muller | Report of the ad hoc group on core experiments on efficient coding in MPEG-4 video
1273 | Masami Ogata, Nobuyoshi Miyahara | Proposal of new schemes for wavelet coding of inter frame
1274 | Jiro Katto, Yoshihiro Miyamoto, Akio Yamada, Mutsumi Ohta | Proposals for Media Integration in MPEG4/SNHC
1275 | Jiro Katto, Peter K. Doenges | Report of the AdHoc Group on SNHC API
1276 | Taisuke Matsumoto | MMR Shape Coding Results with the Arithmetic Coding Method
1277 | Michael Frater, John Arnold, Martin Kuchlmayr | Results of Error Resilience Core Experiment E1
1278 | Hirohisa Jozawa, Kazuto Kamikura | Results of Core Experiment P1
1279 | G. H. Park, S. M. Chun, J. H. Moon, C. S. Park | Core Experiment Results of S6: Shape Adaptive Region Partitioning Method
1280 | Corinne Le Buhan, Touradj Ebrahimi | Study of the effect of lossy shape coding on motion/texture coding and reconstructed VOP quality evaluation
1281 | M. Schumann | Information about DSM-CC Conformance Testing activities (DAVIC)
1282 | Toshio Miki, Satoru Adachi, Tomoyuki Ohya | Results of Core Experiments on Error Resilience - E1
1283 | Minoru Etoh | Result of Core Experiment S2 and Proposal for Sprite-based Coding
1284 | Jae Gark Choi, Young-Kwon Lim, Myoung Ho Lee, Gun Bang, Jisang Yoo | Automatic segmentation based on spatio-temporal information (ETRI description of Core Experiment N2)
1285 | Shinya Nakajima, Masanobu Abe | Interface for Hybrid Text-to-Speech Synthesis
1286 | Akio Yamada, Yoshihiro Miyamoto | Results of Core Experiment P1 (Global Motion Compensation)
1287 | Yoshihiro Miyamoto | Results of Core Experiment P6 (2D triangle mesh based MC)
1288 | Gilles Privat, Ivan Le Hin | Hardware evaluation of shape decoding APIs
1289 | Gilles Privat, Marc Brelot | An implementation of compositing API for 2.5D image representation
1290 | Christian Koechling, Gunnar Nitsche | Java Encapsulation of G723 Speech Decoder
1291 | Thomas Wiegand, Markus Flierl | Results of Core Experiment P5 (Entropy-Constrained Variable Block Size Coding)
1292 | J. De Lameillieure | Results of the core experiment on SA-DCT (O4)
1293 | J. De Lameillieure | On field pictures in temporal scalability in the Multi-View Profile
1294 | Andreas Hutter, Peter Kuhn, Stephan Herrmann, Erich Haratsch | Results of Core Experiment P5
1295 | Gunnar Nitsche, Peter Vogel | Changes in H.223 Annex A
1296 | Bernd Edler | Report of the Ad-hoc Group on MPEG-4 Audio VM
1297 | Hirohisa Jozawa, Yoshinori Suzuki, Yuichiro Nakaya, Takahiro Fukuhara, Akio Yamada, Minoru Etoh | Proposal of Integrated Bitstream Syntax for P1, P3, and S2
1298 | Bernd Edler | Updated MPEG-4 Audio VM Description
1299 | S. R. Quackenbush | Submission of MPEG-2 NBC Decoder to MPEG-4 Audio VM
1300 | Selinger, Marquardt, Stabernack | Introduction of a tool-based and object-oriented view of MSDL-M
1301 | Rob Koenen | Report of AHG on MPEG-4 Requirements
1302 | Jae-Seob Shin | Report on CE S6 (SARP method)
1303 | Y. Suzuki, Y. Nakaya, S. Misaka, A. Date | Results of core experiment P1 (global motion compensation)
1304 | Y. Nakaya, Y. Suzuki, A. Date, S. Misaka | Results of core experiment P6 (2D triangle mesh-based MC prediction)
1305 | MoMuSys' partners | MoMuSys C implementation of MPEG-4 video VM 3.1
1306 | Olivier Avaro | Report of the AHG on MSDL Working Draft Editing
1307 | Karlheinz Brandenburg | Report of the Ad Hoc Group on MPEG-4 Audio Core Experiments
1308 | Karlheinz Brandenburg | Report of the Ad Hoc Group to refine Resolution 18 of the Tampere Meeting
1309 | Otto Schnurr, Davis Pan | A Request for the Evaluation of a Possible MPEG-4 Audio Core Experiment
1310 | James Irwin, Ralf Schaefer, Paul Fellows | MPEG-4 architectural issues
1311 | Julien Signes, Joern Ostermann | Report of the Adhoc group 1332 on MSDL video decoding API
1312 | Sang-hee Lee, Jae-kyoon Kim, Joo-hee Moon | Some Results and New Trials on Core Experiment T9/T10 - DC/AC Prediction
1313 | Bob Senn | Draft Requirements Profile for Content-based Storage and Retrieval
1314 | Jae-won Chung, Jae-kyoon Kim, Joo-hee Moon | Proposal for efficient S4a and S4h - and some results
1315 | Carsten Herpel | Report of ad-hoc group on MSDL Multiplex Verification Model
1316 | Carsten Herpel | Signalling requirements in MPEG-4
1317 | Carsten Herpel | Application of error resilience tools available in the MPEG-4 multiplex
1318 | Carsten Herpel | Extension of the multiplex table definition
1319 | A. Puri | Report of Ad hoc Group on Multifunctional Coding in MPEG-4 Video
1320 | A. Puri, R. L. Schmidt, B. G. Haskell | Description and Results of Coding Efficiency experiment T9
1321 | R. L. Schmidt, A. Puri, Weiping Li, B. G. Haskell | Description and Results of Coding Efficiency Experiment T11
1322 | A. Puri, R. L. Schmidt, B. G. Haskell | Description of Coding Efficiency experiment T4
1323 | Gary J. Sullivan | Progress and Plan for "H.263+" Standardization
1324 | Antony Crossman, Gary J. Sullivan | Speech/Audio Coding for Multimedia Terminals and Multi-Networks
1325 | Antony Crossman, Gary J. Sullivan | Overview of the PictureTel Transform Codec (PTC)
1326 | Ferran Marques, Paulo Nunes | Description of Core Experiment S4b: Multi-Grid Chain Code Method
1327 | Paulo Nunes, Ferran Marques | Results for the Core Experiment on Multi-Grid Chain Code
1328 | A. M. Tekalp, P. J. L. van Beek, C. Toklu | Tracking and Functionality Demonstration
1329 | A. M. Tekalp, P. J. L. van Beek | Core experiment M2: Updated description
1330 | Ganesh Rajan, Peter Doenges | Report of the Ad-Hoc Group on Synthetic/Natural Hybrid Coding
1331 | P. Gerken, H. Li | Comparison of shape coding techniques (Partial results of core experiment S4)
1332 | Frederic Dufaux | Updated Results for the Core Experiment N3
1333 | Iraj Sodagar, Stephen Martucci | Status Report of Core Experiment T1/T2: Wavelet Coding of I and P Pictures
1334 | Iraj Sodagar, Hung-Ju Lee, Stephen Martucci | Status Report of Core Experiment T5/T6: Vector Wavelet Video Coding
1335 | Tihao Chiang, Iraj Sodagar, Stephen Martucci, Ya-Qin Zhang | Status Report of Core Experiment Q2: Improved Rate Control
1336 | Cheung Auyeung | Results of Adaptive 3D VLC for Intra-coded Pictures
1337 | Cheung Auyeung | Statistic AHG Report
1338 | Ganesh Rajan, Peter Doenges | Report of the Ad-Hoc Group on Synthetic/Natural Hybrid Coding
1339 | Jongil Kim, Sang Hoon Lee, Kyuhwan Chang | Results of core experiments O4, O8 and O9
1340 | Jongil Kim, Jinhun Kim, Kyuhwan Chang | Results of Core experiment S4
1341 | Kevin O'Connell | Geometric Representation of Shapes - S4a Simulation Results
1342 | Iole Moccagatta, Yashoda Nag | Proposal for MPEG-4 Video VM Statistics File
1343 | Iole Moccagatta, Raj Talluri | Proposal for Separate Motion/Texture Syntax for I-, P-, and B-VOPs in the MPEG-4 Video VM
1344 | Tom Bannon, Iole Moccagatta, Yashoda Nag, Raj Talluri | Compression Efficiency Performance of the MPEG-4 Video VM
1345 | Eric Petajan, Fabio Lavagetto | Face File Format
1346 | Eric Petajan | Face Model Parameter Suggestions
1347 | Ming-Chieh Lee, Wei-ge Chen | Proposal for Efficient Transparent Block Skipping
1348 | Ming-Chieh Lee, Chuang Gu, Wei-ge Chen | Results Report on S2 -- Sprite Warping
1349 | Chuang Gu, Ming-Chieh Lee | Results Report on O2 -- Block Padding for Motion Compensation
1350 | Chuang Gu, Ming-Chieh Lee | Results Report on N3 -- Sprite Generation
1351 | Ola Andersson, Marie Wilhelmsson | Results from core experiment P10
1352 | Rob A. Beuker | Results of core experiment T8 Variable-size Lapped Transform coding
1353 | Mihran Tuceryan | Face model specification requirements for real time model based video communication
1354 | US National Body | USNB - Intra-frame Only Profile
1355 | US National Body | USNB - Joint Development
1356 | US National Body | USNB - Statement Regarding Resolution 18
1357 | US National Body | USNB - Requirements
1358 | US National Body | USNB - Audio
1359 | US National Body | USNB - Use of Available Standards
1360 | US National Body | USNB - MPEG-2 Systems Request
1361 | Gerard Eude, Pierre-Reni Rogel, Dominique Nasse, Olivier Avaro | Network transport requirements for MSDL-M
1362 | Joerg-Martin Mueller, Bernhard Grill, Luca Cellario | Use of complete speech coding algorithms in the MPEG-4 Audio-VM
1363 | Marta Karczewicz | Report on Core Experiment P9
1364 | Shinya Kadono, Takahiro Nishi, C.S. Boon | Preliminary Results on Combination of Shape Core Experiment S4d and S4f
1365 | Ralph Neff, Emin Martinian, Eugene Miloslavsky, Avideh Zakhor | Experiment T3: Matching Pursuit Coding of Prediction Errors
1366 | AFNOR French NB | French NB contrib on works identification
1367 | Kiran Challapali, Richard Chen | Results of core experiment: "Chroma keying for coding region textures" (O-6)
1368 | Shinya Nakajima | Chairman's report on the SNHC/Audio AHG meeting



Annex 4
Requirements Meeting Report

Source: Rob Koenen, Chairman MPEG


Introduction
The Requirements Group had a useful meeting in Chicago, during which many questions were addressed. Not all were answered yet, but good progress was made. It was also good to see the attendance doubling from the last meeting, although the attendance level of the Video Group was not quite matched yet.
The following issues were discussed:
General Requirements Issues
A new version of the Requirements Document was issued (WG11 N1395, MPEG-4 Requirements version 1.1), and it was decided to create a separate Profiles Document (N1394, MPEG-4 Profiles version 1.1). To keep the Profiles and Requirements Documents synchronised, the numbering of the Profiles Document starts at version 1.1. In addition, minor revisions were made to the general requirements; these are marked with revision marks and are therefore easy to spot.
In a joint meeting with the Systems Group, priorities were set for interworking and compatibility.
Discussion about Profiles (@ flex_0)
A discussion about how profiles should be organised at flex_0 resulted in a number of conclusions.
Latency Issues
After a brief discussion, the Requirements Group recommended that the Video, Audio, Systems and SNHC groups address latency issues in their core experiments and proposal evaluations.


The Profiles Document
The Real-Time Communications Profile and the Object-based Storage and Retrieval Profile were discussed. Also, there might be a need for a multimedia broadcast profile. The interested parties were invited to bring requirements to the next meeting. It was noted that people interested in profiles should bring ALL the relevant requirements to MPEG, even when they think these are already covered in the general requirements.

Real-Time Communications Profile
The three levels of the Real-Time Communications Profile were condensed into one. The Requirements Group is confident in the current profile definition, and the profile was approved. Its requirements will develop further, following the technical progress in the Video, Audio and Systems Groups. The requirements were essentially provided by the ITU, but the possibility is left open to include object-based capabilities.

Object based Storage and Retrieval Profile
The Requirements Group was happy to receive a request for an "Object-based Storage and Retrieval Profile". This is an early draft, yet a very useful start for a new profile. It will need many of MPEG-4's object-based capabilities and will be developed further. It is very likely that a requirement exists for high-quality video and audio. The profile was discussed in the joint meeting with the Video Group. A draft is included in the Profiles Document (WG11 N1394), and it will be further developed in the ad hoc group.


Copyright Issues
Copyright issues were briefly discussed. They are very likely important in MPEG-4, but how to deal with them is not yet entirely clear. The discussion will continue in the Ad Hoc Group for MPEG-4 requirements. The goal of this work is to derive requirements for the copyright protection of MPEG-4 objects and composited scenes. The Requirements Group was glad to receive, through the French National Body, a contribution on copyright issues, and looks forward to meeting with experts during the Brazil meeting to discuss the issue further.


Intra and Still Picture Modes
The Japanese and American National Bodies requested good-quality intra and still-picture modes. The results of the discussions were derived partly in the joint meeting with the Video Group.

Annex 5
Systems Meeting Report
Source: O. Avaro, Chairman


1. Architecture

The main results of the discussion on architecture are the following:
- MSDL and SNHC agreed that it would be of great benefit to have one architecture for MPEG-4.
- The current architecture has been adopted by SNHC to develop their APIs.
- MSDL takes into account the SNHC requirement to support no less than VRML 2.0 capabilities.

This means that the current architecture is adopted and the verification model is built on it. The evolution needed to support VRML 2.0 capabilities will be developed in the AHG on Architecture (see below).

The architecture discussion also addressed the format of what will be standardised: a procedural format (language + APIs) or a textual format. There were not enough inputs to make progress on this issue, so it will be addressed in the architecture AHG.


2. Flexibility

The current approach (language + APIs) seems to be stable and to provide the requested flexibility.


3. Composition

The description of the composition of the AV objects (currently achieved by the description of the render method) seems to be sufficient.

An alternative to a procedural description has been proposed. The current description and the additional benefits of the proposal (e.g. the ability to traverse scene nodes easily) should be merged in the new architecture. Phil proposed to achieve this by adding new AV objects called nodes. This proposal will be documented and validated in the architecture group.

The link between the composition and the multiplex has been discussed. Two possible solutions have been proposed:
- the first is an implicit association of multiplex streams with AV objects, based on the logical structure of the scene and the logical structure of the multiplex;
- the second is based on an explicit association through fixed identification of objects. A similar mechanism is used in DSM-CC.
Both solutions need more investigation and more technical proposals to allow an informed choice.

4. Decompression

It was decided during this meeting to freeze decoding flexibility, since there are no clear requirements from the other MPEG groups for flexibility below the level of algorithms and above the level of syntax. The feasibility and the technical approach are nevertheless well defined, and studies in this area can be continued if needed.

Flexibility at the level of algorithms is achieved through the definition of the interface of decoding process objects. Such definitions have been produced and documented in the WD. The same kind of interfaces will be defined for high-level tools such as shape decoders, mesh decoders, etc.

The encapsulation of an audio decoding algorithm has been provided, extending the current approach to audio algorithms and tools.

The flexibility at the level of the syntax decoding is achieved with MSDL-S (see below).


Syntax decoding

The main issue for syntax decoding was its integration within the architecture, which was examined in close detail. We therefore now have a good understanding of how to use MSDL-S in the VM, which mainly consists of:
- The description of the syntax in MSDL-S (e.g. a class Foo).
- The compilation of the class into a C++ or Java class (e.g. Foo).
- The declaration of this syntax class in objects that need to parse data from the bitstream (parsable objects).
- Data being made available to the parsable objects either through the interface of a get method or at the instantiation of the object.

This close analysis revealed some still unsolved technical issues that will be examined in future work, such as:
- Storage of the data for syntax objects containing loops.
- Sequencing of parsing and decoding operations.
- A binary description of MSDL-S.

A compiler from MSDL-S to C++ has been provided by Yihan. It will allow for validation and testing and will be the basis of a set of useful tools for other groups (such as a syntax checker and compliance testing).


Multiplex

Extensive discussions were held by the multiplex experts. The general outputs are:
- A prioritisation of interworking requirements between multimedia terminals: first interworking among MPEG-4 terminals through various networks, then with previous MPEG standards, then with others.
- A clarification of the mux goals: the mux will provide the ability to multiplex content information streams and the appropriate signalling to configure the mux. Error protection does not substitute for network tools but complements them where necessary.
- There is no clear need for an object-oriented description of the mux. However, the integration of the mux into the current VM may be facilitated by such an approach, which also fits well with mux tools such as error protection or encryption. Further studies are needed to establish the cost/added value of the approach.
- Synchronisation of mixed media types can be achieved using MPEG-2-like mechanisms within the context of VRML 2.0. The evolution of the architecture should define how these mechanisms can take place.
- Close collaboration with the ITU is desired, since there is still some room for H.223/A to evolve to take MPEG-4 requirements into account. In any case, interworking between the two standards should be facilitated as much as possible.


More information can be provided by Carsten, who chaired this mainly parallel meeting.


Signaling

The need for signalling was first raised within the multiplex activities, which foresee first defining how the configuration is achieved. More generally, MPEG-4 has a strong need for signalling. These needs have been drafted and will be forwarded to the DSM-CC group, which took responsibility for signalling in the previous MPEG standard. A close collaboration between the two groups should be defined.

Concerning multiplex signalling, two existing standards meet the goals: DSM-CC and H.245. The latter currently has the preference of the mux experts, since its message coding is efficient and its semantics are closer to multiplex needs. In any case, before making any choice, the MSDL group expects expertise and coordination from/with DSM-CC.


Working draft and Verification Model

All these discussions and decisions will be reflected in the edition of a working draft and the definition of a complete verification model.


Annex 6
Video Meeting Report
Source: T. Sikora, Chairman


The primary focus of the Video Group was the review of Core Experiments and the progression of the MPEG-4 Video Verification Model (VM) to version 4.0. Joint meetings were held with the Test Group, the Requirements Group and the MSDL Group to align activities.

The MPEG-2 Multiview Profile was promoted to the status of an Amendment (AM).

Following the results of the Core Experiments and problems detected, a new version 4.0 of the MPEG-4 Video VM (doc. N1380) was released. The main improvements concern the coding efficiency of Intra frames, the coding efficiency of motion-compensated video at very low bit rates, the computationally efficient implementation of the Macroblock Padding procedure for arbitrarily shaped VOPs, and error robustness in error-prone environments.

In particular:

* The Chrominance Subsampling description was improved, namely the way chrominance samples are computed at the borders of arbitrarily shaped objects.

* The Motion Vectors Difference range was corrected.

* The padding of arbitrarily shaped VOPs was changed from frame-based padding to Macroblock-based padding.

* Improved error resilience due to the introduction of an error-resilient syntax and error resilience tools, with the main changes:
- A flag to enable/disable Error Resilience Mode.
- Byte alignment for the session start code.
- Resynchronisation markers introduced in a row-by-row fashion.

* The Intra DC prediction was changed and a DCT-domain AC prediction was introduced to increase coding efficiency in Intra VOPs. In particular:
- At the MB layer a new flag (ACpred_flag) was introduced to enable/disable AC prediction.

* The VOP formation was changed to increase coding efficiency for arbitrarily shaped VOPs; in particular the computation of the bounding box was changed in order to maximize the number of transparent MBs.

* A deblocking filter was introduced in the coding loop to minimize blocking artifacts at very low bit rates. A flag is available in the VOL to disable the filter if required.

* The Temporal Referencing section now has a better description.

* Spatial scalability was revised in order to eliminate inconsistencies.

* The DBQUANT semantics was revised.

* The Bitstream syntax section starts with improved definitions.

* The description of the Separate mode for I, P, and B VOPs was updated.

Promising techniques were identified for video coding efficiency. Discussions on additional functionalities took place in this context. In particular, the Sprite Coding technique currently investigated in Core Experiments provides significant improvement in coding efficiency for VOP-layered approaches and provides important functionality for object-based manipulation of video.

Major improvements have been demonstrated for the efficiency of shape coding in various Core Experiments. It is foreseen to adopt major changes to the VM related to shape coding at the Brazil meeting.

Many Core Experiments will continue until the Brazil meeting. It is expected that the number of Core Experiments related to new techniques will decrease in future meetings.


Annex 7
SNHC Meeting Report
Source: G. Rajan, Acting Chairman


1. Evaluation of Proposals Received in Response to CFP

One proposal was received in Geometry Compression, seven contributions in the area of Face Animation, two relevant to Body Animation, two in Text-to-Speech synthesis, and three in the area of Media Integration.

Apart from evaluating the technical merits and innovations of the contributions, we also decided to work on the integration of these contributions into one architectural framework. For the purpose of developing one right away, the proposed Systems architecture was deemed to be suitable for the moment.

Three areas of work for VM 1.0 were identified: Face and Body Animation; Media Integration of Text and Graphics; and Text-to-Speech and its possible interface to Face Animation (there was one contribution from AT&T in this particular area).

SNHC agreed to adopt the MSDL architecture for the development of their VM 1.0, realizing that it would be useful to have a common architecture between the two groups.
MSDL and SNHC also agreed that, at the least, VRML 2.0 functionalities should eventually be supported within MPEG-4. In that regard, the SNHC group will drive the architecture requirements for the MSDL group and contribute to its evolution via the AHG on Architecture.


2. Work on SNHC VM 1.0

Four main areas were concentrated on for this document: Face and Body parameter description and animation, Media integration of text and graphics, Text-to-Speech synthesis and associated interfaces with face animation.

The SNHC audio functionality architecture was included in this document (w1364) to generate discussion and encourage participation and contributions.

A separate document (w1365) was generated for the enumeration of the face and body {description, animation} parameters.

The text and graphics related functionalities generated a lot of discussion. Added to the mixture was a brief note from the ITU-LBC group asking for a joint effort on exactly the same functionalities. At this stage it was decided to confine the functionality to 2D text and graphics, with further extensions to be explored in the appropriate AHG. As one might notice, a reasonable part of the Media Integration section in the VM document was "borrowed" from the T.126 files.

Although a lot of the APIs are documented in the SNHC VM, they need to be integrated into the Systems architecture in order to ratify their functionalities.


3. New CFP on SNHC Audio

Since no contributions had been received in the area of SNHC audio, a fresh Call for Proposals on the integration of synthetic audio within the MPEG-4 framework was issued. Some understood the call to be asking for fresh contributions in the area of synthetic audio coding algorithms, but this was clarified subsequently.

4. Ad Hoc Groups

Five SNHC AHGs were established.

Annex 8
Audio Meeting Report
Source: B. Edler, Acting Chairman

Opening of the meeting
The MPEG/Audio Subgroup meeting was held during the 36th meeting of WG11 in Chicago, USA on Sept 30 to Oct 2, 1996. The list of participants is given in Annex A-I. The acting chairman welcomed the delegates to the meeting and outlined the work for the three days.

Approval of agenda
The agenda as presented in Annex A-II was approved.

Allocation of contributions
All contributions were listed (see Annex A-VI) and allocated to the agenda. All contributions were presented in Audio plenary.

Communications from the Chair
Mr. Edler reported on the Sunday evening Chairman's meeting.

Tampere meeting report
The Audio Subgroup portion of the Tampere meeting report, July 1996, had been previously distributed and its MPEG-4 relevant parts were approved.

Report of ad hoc group activities
The reports of the ad hoc groups (ad hoc group on MPEG-4 audio VM, M1296 - Edler, and ad hoc group on MPEG-4 audio core experiments, M1307 - Brandenburg) were given in the opening plenary. The table of currently available VM modules in M1296 was updated and included in output document N1379 in order to reflect the latest changes.

Disposition on National Body Comments (DoC)

MPEG-4
The input documents listed in Annex A-VI were discussed in the audio plenary. Recommendations were prepared in reaction to the USNB contributions and to M1362. Document M1271 will be taken into consideration for the design of core experiments on error resilience. Three task groups were formed in order to prepare a textual description of the Audio VM as indicated in Annex A-IV. In addition some flex-0 possibilities were summarized. The main results of the task groups were presented to and discussed by the audio plenary and their work resulted in the Audio VM 2.0 document N1378.

Mr. Kaneko reported the status of SNHC audio.

In a joint meeting with most of the other subgroups the important issues of the July 1997 test were discussed.

Preparation of a press statement
Contributions to the meeting press statement were prepared and approved by the Audio Subgroup.

Liaison matters

Discussion of unallocated contributions
A document entitled "Use of 'Mixed Voiced' mode for 2.0 kbps parametric core of the VM" which was not registered as an input document was presented by Mr. Nishiguchi.

Recommendations for final plenary
A list of recommendations was prepared for approval at the final MPEG plenary meeting. The following ad-hoc groups were established:
a) Ad Hoc Group on SNHC Audio, N1359 - Kaneko
b) Ad Hoc Group on MPEG-4 Audio WD Editing and VM Software Implementation, N1376 - Grill
c) Ad Hoc Group on Core Experiments for MPEG-4 Audio, N1377 - Brandenburg

The output documents given in Annex A-VII were produced by the Audio Subgroup.

Agenda for next meeting
The agenda for the MPEG Audio Subgroup meeting in November '96 in Maceio, Brazil was already approved during the meeting in July '96 in Tampere (see Annex A-III).

A.O.B.

Closing of the meeting



Annex A-I
36th MPEG/Audio Chicago Meeting Participant List
(Sept/Oct. 1996)

Name Country Affiliation e-mail address
Akagiri, K. J Sony ken@av.crl.sony.co.jp
Bosi, M. USA Dolby Laboratories mab@dolby.com
Brandenburg, K. DE FhG - IIS bdg@iis.fhg.de
Burns, R. USA Hughes rburns@hitchcock.dcf.scg.hac.com
Coleman, M. USA FiveBats mc@fivebats.com
Edler, B. DE University of Hannover edler@tnt.uni-hannover.de
Grill, B. DE University of Erlangen grl@lte.e-technik.uni-erlangen.de
Hotani, S. J NTTDoCoMo mpeg4@mlab.nttdocomo.co.jp
Iijima, K. J Sony iijima@pcrd.sony.co.jp
Iwakami, N. J NTT iwakami@splab.hil.ntt.jp
Johnston, J. USA Rockwell james.johnston@nb.rockwell.com
Kleijn, W. B. USA AT&T bastiaan@speech.kth.se
Koike, T. J Sony koike@av.crl.sony.co.jp
Kroon, P. USA Bell Laboratories kroon@research.bell-labs.com
Lindqvist, M. S Ericsson morgan.lindqvist@era-t.ericsson.se
Lueck, C. USA TI lueck@hc.ti.com
Mainard, L FR CCETT lmainard@ccett.fr
Matsumoto, J. J Sony jun@pcrd.sony.co.jp
Miki, T. J NTT DoCoMo miki@mlab.nttdocomo.co.jp
Moriya, T. J NTT moriya@splab.hil.ntt.jp
Nishiguchi, M. J Sony nishi@pcrd.sony.co.jp
Nomura, T. J NEC sc29a@dsp.cl.nec.co.jp
Okuda, Y. J Toshiba okuda@cns.clab.toshiba.co.jp
Oomen, W. NL Philips oomena@prl.philips.nl
Purnhagen, H. DE University of Hannover purnhage@tnt.uni-hannover.de
Schnurr, O. USA Motorola schnurr@ukraine.corp.mot.com
Spille, J. DE Thomson Multimedia spillej@tcernd1.hanover.tce.de
Su, H. USA Rockwell hysu@nb.rockwell.com
Sullivan, G. USA PictureTel garys@pictel.com
Tan, A.-P. RS Panasonic Singapore Labs aptan@psl.com.sg
Tanaka, N. J Matsushita natanaka@telecom.mci.mei.co.jp
Taori, R. NL Philips Research taori@natlab.research.philips.com
Thi, J. USA Rockwell jimthi@nb.rockwell.com
Vaananen, M. FIN Nokia Res. Center mauri.vaananen@research.nokia.com
Yao, J. USA Fujitsu Labs of America jyao@fla.fujitsu.com



Annex A-II
Agenda for the 36th MPEG/Audio Subgroup Meeting in Chicago, September 1996


I. Opening of the meeting
II. Approval of agenda
III. Allocation of contributions
IV. Communications from the Chairman
V. Tampere meeting report
VI. Report of ad hoc group activities
VII. Disposition of National Body Comments (DoC)
VIII. MPEG-4
IX. Preparation of a press statement M0883, (N1249)
X. Liaison matters
XI. Discussion of unallocated Contributions
XII. Recommendations for final plenary
XIII. Agenda for next meeting
XIV. A.O.B.
XV. Closing of the meeting



Annex A-III
Agenda for the 37th MPEG/Audio Subgroup Meeting in Maceio, Alagoas, Brazil, November 1996

I. Opening of the meeting
II. Approval of agenda
III. Allocation of contributions
IV. Communications from the Chairman
V. Dallas meeting report
VI. Report of ad hoc group activities
VII. Resolution of National Body comments
VIII. MPEG-2 BC
IX. MPEG-2 NBC
X. MPEG-4
XI. Preparation of a press statement
XII. Liaison matters
XIII. Discussion of unallocated Contributions
XIV. Recommendations for final plenary
XV. Agenda for next meeting
XVI. A.O.B.
XVII. Closing of the meeting



Annex A-IV
Audio Task Groups

T/F tool description - Brandenburg
Bosi Brandenburg Iwakami
Koike Kroon Lindqvist
Lueck Mainard Moriya
Oomen Schnurr Thi
LPC tool description - N.N.
Kroon Lindqvist Nomura
Su Tan Tanaka
Taori
Parametric tool description - Nishiguchi
Iijima Kroon Matsumoto
Nishiguchi Purnhagen Su



Annex A-VI
Input Documents

No. Group Title Source
1358 HoD USNB Contribution - Audio F. Whittington et al.
1359 HoD USNB Contribution - Use of Available Standards F. Whittington et al.
1271 MPEG-4 Consideration on Core Experiments of Error Resilience aspects in MPEG-Audio S. Hotani et al.
1296 MPEG-4 Report of the Ad-hoc Group on MPEG-4 Audio VM B. Edler
1298 MPEG-4 Updated MPEG-4 Audio VM Description B. Edler
1299 MPEG-4 Submission of MPEG-2 NBC Decoder to MPEG-4 Audio VM S. R. Quackenbush
1307 MPEG-4 Report of the Ad Hoc Group on MPEG-4 Audio Core Experiments K. Brandenburg
1324 MPEG-4 Speech/Audio Coding for Multimedia Terminals and Multi-Networks A. Crossman et al.
1325 MPEG-4 Overview of the PictureTel Transform Codec (PTC) A. Crossman et al.
1202 MPEG-4 Technical description of VM T/F coder based on LPC and VQ N. Iwakami et al.
1166 MPEG-4 Use of alternative frame lengths for MPEG-2 NBC Audio Coding in MPEG-4 B. Grill
1228 MPEG-4 A report on the HVXC parametric core for MPEG-4 Audio VM M. Nishiguchi et al.
1362 MPEG-4 Use of complete speech coding algorithms in MPEG-4 Audio VM J.-M. Müller et al.



Annex A-VII
Output Documents
No. Authors Title
N1378 Audio Subgroup MPEG-4 Audio Verification Model 2.0
N1379 Edler MPEG-4 Audio Software Library Overview
N1398 Moriya Prescreening Listening Test Procedure for Core Experiments of MPEG-4 Audio



Annex 9
Test Meeting Report

Source: Laura Contin, Chairman

Introduction

The MPEG Test Subgroup met in Chicago during the 36th meeting of WG11.
The following items were addressed:
1. Results of the verification tests on MPEG-2 Multiview profile
2. Test procedures to be used in July '97 tests.


MPEG-2 Multiview profile tests

The results of tests carried out on stereo sequences coded with the MPEG-2 multiview profile (ISO/IEC 13818-2/AM3) were presented and discussed. The tests were carried out at three different test sites located in Japan (NHK), Germany (HHI) and Canada (CRC). Taking into account the different equipment used for displaying the sequences, a considerable consistency among the test sites was observed.

From the results it can be concluded that, generally speaking, at the tested bit rates viewers did not find the coding artifacts too annoying. Details about test procedures, laboratory set-up and experimental results can be found in document WG11/N1373. This concludes the activities on the MPEG-2 Multiview profile.

July '97 tests

Representatives of all the MPEG subgroups participated in a meeting to discuss goals and experimental conditions for the MPEG-4 tests scheduled for July '97.

These tests will be aimed, on the one hand, at comparing the audio and video VMs with both existing standards and new emerging technology and, on the other hand, at checking the status of the VMs against the requirements for the standard. In other words, the purposes of the July '97 tests are both competition among the VMs and new proposals, and verification of the standard itself.

Concerning the competition tests, the assessment methods and procedures will basically be those already used in the previous audio and video tests, apart from some exceptions such as the introduction of the Double Stimulus Continuous Quality Evaluation (DSCQE) method for testing error robustness. Document WG11/N1374 provides a preliminary description of the test methods and procedures to be used in the July '97 tests. The Test subgroup has asked for the support of audio and video experts to revise this document, in particular concerning the definition of testing conditions (e.g. source material, pre-processing, coding parameters, error conditions, etc.). The following individuals will coordinate the revision of particular sections of the document:
Responsible   Sections of doc. WG11/N1374 to be revised
B. Edler      Section 2 - Audio tests
J. Muller     Sections 3.3, 3.4, 3.5.1 - Video compression tests
M. Frater     Sections 3.3, 3.4, 3.5.2 - Video error robustness tests
J. Osterman   Sections 3.3, 3.4, 3.5.3 - Video content-based interactivity tests

A revised version of the document will be prepared by the next MPEG meeting.

Concerning the verification tests, several possible evaluations have been taken into account. The Test subgroup proposed audiovisual tests, the Implementation subgroup made a proposal to evaluate graceful degradation, and SNHC proposed tests to compare MPEG-2 and MPEG-4 on text overlay. The Test subgroup also suggested task-based tests to evaluate facial animation performance (a possible task being, for example, the recognition of emotions). If real-time decoders are available by July '97, interactive tests on audio, video, MSDL and possibly also SNHC could be carried out. For the verification tests, it would be advisable to focus on particular profiles and levels and to tailor the tests to a representative application of such profiles/levels. More thought is needed on the verification tests, and suggestions from the MSDL, Implementation and Requirements subgroups are expected.

The last, but very important, point discussed was the need for new sequences.

The following material is absolutely needed:
1. audio-visual sequences lasting at least 10 seconds.
2. (audio-)visual sequences lasting at least 2 minutes.

Moreover, to evaluate the codecs' performance fairly, it would be advisable to use new audio/speech and video material, because the codecs under test will likely have been tuned on the currently available source material.

The Test subgroup has invited all MPEG members to bring to Maceio any material that could be used for the July '97 tests. A material screening session will be arranged during the next meeting.



Annex 10
Implementation Studies Meeting Report

Source: Paul Fellows, Chairman

Introduction
The meeting had a broader attendance in terms of expertise and application profiles and benefited as a result. As there were a number of new members to both the group and MPEG-4, some time was spent describing the work of the group in the past and the general approach that had been taken.

This was the first meeting at which the ISG for MPEG-4 was able to deliver some concrete results in terms of complexity assessment of the Video VM. The chairman would like to thank in particular Simon Winder, Peter Kuhn, Franck Mamelet and Jean Gobert for the excellent work carried out before the meeting.

Using a performance-oriented ANSI C implementation of the video verification model (VM 2.2) produced by ACTS EMPHASIS, a tenfold increase in performance was achieved over the existing software implementations of the VM. Detailed profiling of this software identified the key performance-critical components of the standard. Based upon this information, the Video and Implementation Studies groups jointly started an activity to reduce the complexity of the identified modules. There is still considerable scope to improve the performance of the software further, and then in the future to include platform-dependent optimizations. The exercise will be repeated at a later date when a more stable version of the Video VM becomes available, i.e. the group will not provide results at every MPEG meeting.

Another key activity performed by the implementation studies group was the identification of ways to gracefully degrade the computational complexity under conditions of high processing demands from either MPEG-4 itself or other co-existent applications, for instance when the decoder runs in software on a personal computer. These techniques, if proven feasible, will lead to increased service availability to the user.

During the meeting the following implementation documents were reviewed:

1192 Marco Mattavelli Report of the Ad-hoc group on computational graceful degradation
1193 Marco Mattavelli, Sylvain Brunetton Measures of the range of computational based scalability
1199 Keith Kenemer, Dmitriy Korchev, Michael Zeug Complexity Analysis of the Decoder used in the P5 Core Experiment
1257 Peter Kuhn Complexity Analysis of the MPEG-4 Video Verification Model Decoder
1258 Peter Kuhn AHG Report on Video Verification Model Complexity Assessment
1288 Gilles PRIVAT, Ivan LE HIN Hardware evaluation of shape decoding APIs

Graceful degradation
Many issues remain open for contributions and/or discussion. A joint meeting between Test, Requirements and Implementation was held to discuss including graceful degradation in the July '97 tests. Further work will continue on this subject and will be integrated within an Implementation Model (IM) of the Verification Model (VM).

Complexity Analysis of MPEG-4 Video Verification Model Decoder
Document M1257.doc describes the experimental conditions and the results of the complexity analysis; the results of most interest are given below. The key issue identified was that the alpha padding technique described for VM2.2 consumed 43% of the computation time, as well as by far the largest share of memory bandwidth.

An instruction-level profiling of a speed-optimized VM2.2 decoder (ACTS EMPHASIS) was performed, showing that QCIF real-time decoding of 4 VOPs (sequence coastguard) is possible on current Pentium and Ultrasparc architectures. The results show that 280 RISC-MIPS and 298 MByte/s of memory (i.e. cache) access bandwidth are required for the above scenario. It can also be seen that clearly written, flexible, extensible but not speed-optimized code (e.g. the ACTS Momusys VM) is not suited for implementation complexity analysis (though extremely valuable for other core experiments).

The results also show that the CPU instructions used by a software implementation on a real system consist of roughly one third arithmetic instructions, one third memory access instructions and about 25% control instructions.

Distribution of Instruction Usage and Memory Bandwidth Usage

Sequence coastguard, 4 VOPs, 30 fps, 10 s, QCIF, Ultrasparc

                          % time   Calls           Mega instructions/s (iprof)       Memory bandwidth
Function                  (gprof)  (iprof = gprof) Arith.  Control  Memory  Sum*     (MByte/s, iprof)
pad_alpha                  43.17     3588           55      42       69     191           215
internal_mcount            17.18   (library function used by gprof only)
add_one_vop_alpha           9.26     1200           10      0.3      9.7     21            18
idct                        5.76    84613            5.7    0.4      3.0      9.6           7.4
decode_binary_shape         4.23    49125            3.2    1.2      2.8      8.5           7.8
interp_2h                   2.34   109936            2.6    0.1      1.7      4.6           2.2
render_inter_texture        2.16    84082            1.5    0.1      2.8      4.5           5.7
get_macroblock_texture      2.16    43434            1.2    1        1.7      5.4           5.5
interp_c                    1.44    99251            0.5    0.1      1.4      2.0           1.9
pad_noalpha_umv             1.26     3588            0.8    1        1.0      2.1           3.6
decode_level_0_to_2         1.17    49125            1.3    0.7      1.3      3.9           3.6
showbits                    1.08   864096            2.0    0.7      1.7      5.0           4.4
interp_4                    0.90    20782            0.8    0.02     0.5      1.3           0.6
test_and_swap               0.63   221976            0.4    0.3      0.6      1.5           1.4
get_coded_prediction        0.63    43280            0.4    0.3      0.8      1.8           2.4
mcount                      0.63   (library function used by gprof only)
clear_viewport              0.54      301            0.2    0.1      0.4      0.8           1.6
get_non_obmc_block          0.45   142821            0.2    0.2      0.4      0.9           1.1
fill_16x16                  0.45    22623            0.151  0.04     0.6      0.9           0.8
flushbits                   0.36   862895            0.4    0.3      0.7      1.8           2.9
get_TCOEF                   0.36   229038            0.6    0.3      0.6      1.8           1.6
fill_4x4                    0.27   290684            0.1    0.06     0.5      0.7           0.6

* Sum includes other instruction types.

The data above are the time and instructions spent in the listed functions; subfunctions are accounted for separately and are not included in the statistics of their calling functions. GNU gprof delivered sampled function execution times and call counts, while iprof delivered exact function call counts and exact instruction usage statistics.

Simulation time
With gcc 2.7.2 the uninstrumented decoder runtime on a 167 MHz Ultrasparc was 15 seconds (20 fps) when writing to the X11 display and 10 seconds (30 fps) when writing into memory. Note, however, that these figures are for QCIF; there is still a long way to go to standard-definition TV resolution.

Complexity Analysis of the Decoder used in the P5 Core Experiment

The results of a preliminary complexity analysis of the Iterated Systems decoder used for core experiment P5 were reviewed (M1199.DOC). The analysis indicated that this scheme is particularly inexpensive to implement and that, using a 166 MHz Pentium (the platform for which the implementation was optimised), real-time performance is possible.

Execution Times

The data presented below shows the measured execution times of the complete decoder (including unpacking and color conversion) on a 166 MHz Pentium.
Bit rate (bps)   Frame rate (fps)
26K                  80.88
52K                  76.24
95K                  68.64
Table 1: Decoder performance vs. bit rate at QCIF resolution

Bit rate (bps)   Frame rate (fps)
24K                  20.61
108K                 21.13
162K                 20.47
994K                 17.04
Table 2: Decoder performance vs. bit rate at CIF resolution

The decoding algorithm is a very simple paste function with an intensity adjustment; no floating-point variables are required. The greatest memory usage is for storing the current and previous frames, and computational requirements grow in direct proportion to image size. Profiling of the decoder showed that the computational complexity of the paste function is significantly lower than that of the YUV to RGB color conversion.

Hardware evaluation of shape decoding APIs

Document M1288.DOC discussed the merits of formalising the API interfaces between the decoder and a compositor tool. The contribution studied a set of shape representations that could be used to isolate higher-level 2D object descriptions from pixel-level back-end rendering/compositing operations. Rather than a single interface, these descriptions provide a consistent set of intermediate representations running from the higher to the lower levels, from contour/skeleton to binary masks and addressing patterns. Either a traditional processor-memory model or an associative-processing/logic-enhanced memory model can support the lowest levels of these representations in hardware, whereas higher-level representations could be converted in software. In both cases, the inclusion of these representations in standardised APIs makes it possible to leverage all capabilities of the underlying hardware while maintaining cross-platform interoperability.



The idea is that a shape representation included in an API serves as a pivotal representation to which the others are converted before they access hardware resources. To date, the only equivalent for video is the raster-scan format, which unifies other formats at their lowest common denominator and precludes parallel processing.


Annex 10
Liaison meeting report


The Liaison group considered the following input documents:

SC29/N1744 from ITU-T SG15 on video and audio issues. Most of this document was discussed in Tampere, where a reply WG11 N1305 was produced.

SC29 N1730 from SC21 on progress in ASN.1. The document was distributed to the MSDL Mux group.

SC29/N1746 from IEC/TC100 was noted. No action was taken.

SC29/N1749 from DAVIC was discussed in Tampere. No further action.

MPEG96/1171 from ITU-T LBC group regarding text and graphics overlays was discussed. SNHC will provide these features.

MPEG96/1172 from ITU-T LBC group regarding cooperation and consideration of existing ITU-T standards was discussed.

MPEG96/1173 from ITU-T LBC group regarding MSDL-M specification. MSDL decided to try to propose changes to H.223A for its needs.

WG11/N1368 was produced for sending to ITU-T SG15 on the subjects of overlays and cooperation.

WG11/N1369 was produced for sending to ITU-T SG15 on the subject of multiplexing.

Karlheinz Brandenburg was approved as temporary liaison to ITU-R WP 10C.
Philip Chou and Ganesh Rajan were approved as liaisons to VRML.